TWI822792B - Method of characterizing activity in an artificial neural network, and system comprising one or more computers operable to perform said method - Google Patents

Method of characterizing activity in an artificial neural network, and system comprising one or more computers operable to perform said method Download PDF

Info

Publication number
TWI822792B
TWI822792B (application TW108119813A)
Authority
TW
Taiwan
Prior art keywords
neural network
activity
artificial neural
patterns
input
Prior art date
Application number
TW108119813A
Other languages
Chinese (zh)
Other versions
TW202001693A (en)
Inventor
Henry Markram
Ran Levi
Kathryn Pamela Hess Bellwald
Original Assignee
Inait SA (Switzerland)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US16/004,635 (published as US20190378007A1)
Priority claimed from US16/004,671 (published as US11972343B2)
Priority claimed from US16/004,796 (published as US20190378000A1)
Priority claimed from US16/004,837 (published as US11663478B2)
Priority claimed from US16/004,757 (published as US11893471B2)
Application filed by Inait SA (Switzerland)
Publication of TW202001693A
Application granted
Publication of TWI822792B

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/082Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/049Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/004Artificial life, i.e. computing arrangements simulating life
    • G06N3/006Artificial life, i.e. computing arrangements simulating life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/004Artificial life, i.e. computing arrangements simulating life
    • G06N3/008Artificial life, i.e. computing arrangements simulating life based on physical entities controlled by simulated intelligence so as to replicate intelligent life forms, e.g. based on robots replicating pets or humans in their appearance or behaviour
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/044Recurrent networks, e.g. Hopfield networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00Computing arrangements using knowledge-based models
    • G06N5/01Dynamic search techniques; Heuristics; Dynamic trees; Branch-and-bound

Abstract

Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for characterizing activity in a recurrent artificial neural network and encoding and decoding information. In one aspect, a method can include characterizing activity in an artificial neural network. The method is performed by data processing apparatus and can include identifying clique patterns of activity of the artificial neural network. The clique patterns of activity can enclose cavities.

Description

Method of characterizing activity in an artificial neural network, and system comprising one or more computers operable to perform said method

The present disclosure relates to characterizing activity in recurrent artificial neural networks. It also relates to encoding and decoding information, and to systems and techniques that use the encoded information in a variety of contexts.

The present disclosure relates to characterizing activity in recurrent artificial neural networks. Such characterization can be applied, for example, to identifying decision moments and to encoding/decoding signals in contexts such as transmission, encryption, and data storage. The disclosure also relates to encoding and decoding information, and to systems and techniques that use the encoded information in a variety of contexts. The encoded information can represent activity in a neural network, for example, a recurrent neural network.

Artificial neural networks are devices inspired by structural and functional aspects of networks of biological neurons. In particular, artificial neural networks use a system of interconnected constructs, called nodes, to mimic the information-encoding and other processing capabilities of networks of biological neurons. The arrangement and strength of the connections between nodes in an artificial neural network determine the outcome of information processing or information storage by the network.

A neural network can be trained to produce desired signal flows within the network and to achieve desired information-processing or information-storage results. In general, training the neural network during a learning phase changes the arrangement and/or strength of the connections between nodes. A neural network can be considered trained when it outputs sufficiently appropriate processing results for a given set of inputs.

Artificial neural networks can be used in a variety of different devices to perform nonlinear data processing and analysis. Nonlinear data processing does not satisfy the superposition principle; that is, the variables to be determined cannot be written as a linear sum of independent components. Nonlinear data processing is useful in contexts such as pattern and sequence recognition, speech processing, novelty detection and sequential decision-making, complex-system modeling, and systems and techniques in a variety of other contexts.

Both encoding and decoding convert information from one form or representation into another. Different representations can provide features that are more or less useful in different applications. For example, some forms or representations of information (e.g., natural language) can be easier for humans to understand. Other forms or representations can be smaller in size (e.g., compressed) and easier to transmit or store. Still other forms or representations can intentionally obscure the information content (e.g., the information can be cryptographically encoded).

Regardless of the particular application, the encoding or decoding process generally follows a predefined set of rules or an algorithm that establishes the correspondence between the information in its different forms or representations. For example, an encoding process that produces a binary code can assign a role or meaning to each bit according to its position within a binary sequence or vector.

This disclosure describes technologies relating to the characterization of activity in artificial neural networks.

For example, in one implementation, a method can include characterizing activity in an artificial neural network. The method is performed by data processing apparatus and can include identifying clique patterns of activity in the artificial neural network. The clique patterns of activity can enclose cavities.

This and other implementations can include one or more of the following features. The method can include defining windows of time during which the activity of the artificial neural network is responsive to an input into the artificial neural network. The clique patterns of activity can be identified in each of the windows of time. The method can include identifying a first of the windows of time based on a distinguishable likelihood of the clique patterns of activity occurring during that first window. Identifying the clique patterns can include identifying directed cliques of activity. Directed cliques of lower dimension that reside within directed cliques of higher dimension can be discarded or ignored, as in the sketch below.
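
As a concrete illustration of the last two steps, the following is a minimal sketch, not the patent's implementation. It assumes the activity in one window has already been summarized as a set of directed edges, treats a directed clique as an all-to-all connected node set whose edge directions admit a single acyclic ordering, and uses hypothetical names throughout.

```python
from itertools import combinations

def directed_cliques(nodes, edges):
    """Enumerate directed cliques in one window's activity graph.

    `edges` is a set of (source, sink) pairs that transmitted signals
    in the window. A candidate node set qualifies when every pair is
    connected by exactly one directed edge (all-to-all, no reciprocal
    edges) and the out-degrees are 0, 1, ..., n-1, which forces a
    single acyclic ordering from source node to sink node.
    """
    found = []
    for n in range(2, len(nodes) + 1):
        for subset in combinations(nodes, n):
            sub = [(a, b) for (a, b) in edges if a in subset and b in subset]
            pairs = {frozenset(e) for e in sub}
            if len(sub) != n * (n - 1) // 2 or len(pairs) != len(sub):
                continue  # not all-to-all, or reciprocally connected
            out_deg = {v: sum(1 for a, _ in sub if a == v) for v in subset}
            if sorted(out_deg.values()) == list(range(n)):
                found.append(frozenset(subset))
    return found

def maximal_cliques(cliques):
    """Discard lower-dimensional cliques residing in higher-dimensional ones."""
    return [c for c in cliques if not any(c < d for d in cliques)]
```

In this reading, a clique of n nodes corresponds to an (n-1)-dimensional simplex, which is why containment in a higher-dimensional clique is the discard criterion.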

The method can include distinguishing the clique patterns into classes and characterizing the activity according to the number of occurrences of the clique patterns in each class. Distinguishing the clique patterns can include distinguishing them according to a number of points in each clique pattern. The method can include outputting, from the recurrent artificial neural network, a binary sequence of zeros and ones, where each digit in the sequence indicates whether a corresponding pattern of activity is present in the artificial neural network. The method can include constructing the artificial neural network by reading the digits output by the artificial neural network and evolving a structure of the artificial neural network. The structure can be evolved by iteratively altering the structure of the artificial neural network, characterizing the complexity of the patterns of activity in the altered structure, and using the characterization of the complexity of the patterns as an indicator of whether the altered structure is desirable.
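
Continuing the sketch above, the hypothetical helper below bins the surviving cliques by their number of points and emits the binary sequence this paragraph describes, one digit per monitored pattern of activity. The choice of monitored patterns is an assumption, since the text leaves their selection open.

```python
from collections import Counter

def characterize_window(cliques, monitored_patterns):
    """Summarize one window: occurrences per class, plus a binary code.

    `cliques` are the maximal directed cliques found in the window;
    `monitored_patterns` is an ordered list of node sets whose presence
    or absence becomes one digit of the output sequence.
    """
    counts = Counter(len(c) for c in cliques)  # class = number of points
    present = set(cliques)
    bits = [1 if frozenset(p) in present else 0 for p in monitored_patterns]
    return counts, bits
```

For example, `characterize_window(maximal_cliques(directed_cliques(nodes, edges)), patterns)` yields both the per-class counts and the zero/one output sequence for a single window.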

The artificial neural network can be a recurrent artificial neural network. The method can include identifying decision moments in the recurrent artificial neural network based on determining the complexity of the patterns of activity in the recurrent artificial neural network. Identifying the decision moments can include determining a timing of activity having a complexity that is distinguishable from other activity that is responsive to the input, and identifying the decision moments based on the timing of the activity having the distinguishable complexity. The method can include inputting a data stream into the recurrent artificial neural network and identifying the clique patterns of activity during the inputting of the data stream. The method can include evaluating whether the activity is responsive to the input into the artificial neural network. Evaluating whether the activity is responsive to the input can include evaluating that relatively simple patterns of activity occurring relatively soon after the input event are responsive to the input, whereas relatively complex patterns of activity occurring relatively soon after the input event are not responsive to the input, and evaluating that relatively complex patterns of activity occurring relatively late after the input event are responsive to the input, whereas relatively complex patterns of activity occurring relatively early after the input event are not responsive to the input.
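
One way to read "distinguishable complexity" is as a statistical outlier in a per-window complexity series. The sketch below takes that reading: it assumes each window has already been scored (for instance, by a weighted count of clique patterns) and flags windows that deviate from the baseline by more than a chosen number of standard deviations. Both the score and the threshold are illustrative assumptions, not the patent's prescription.

```python
from statistics import mean, stdev

def decision_moments(complexity_by_window, k=2.0):
    """Return the windows whose complexity is distinguishable.

    `complexity_by_window` maps a window index to a scalar complexity.
    A window is flagged when it lies more than `k` standard deviations
    from the mean over all windows (requires at least two windows).
    """
    values = list(complexity_by_window.values())
    base, spread = mean(values), stdev(values)
    return [w for w, c in sorted(complexity_by_window.items())
            if abs(c - base) > k * spread]
```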

In another implementation, a system can include one or more computers operable to perform operations. The operations can include characterizing activity in the artificial neural network, including identifying clique patterns of activity in the artificial neural network, wherein the clique patterns of activity enclose cavities. The operations can include defining windows of time during which the activity of the artificial neural network is responsive to an input into the artificial neural network. The clique patterns of activity can be identified in the windows of time. The operations can include identifying a first of the windows of time based on a distinguishable likelihood of the clique patterns of activity occurring during that first window. Identifying the clique patterns can include discarding or ignoring directed cliques of lower dimension that reside within directed cliques of higher dimension. The operations can include constructing the artificial neural network, where constructing the artificial neural network includes reading the digits output by the artificial neural network and evolving a structure of the artificial neural network. The structure can be evolved by iteratively altering the structure of the artificial neural network, characterizing the complexity of the patterns of activity in the altered structure, and using the characterization of the complexity of the patterns as an indication of whether the altered structure is desirable. The artificial neural network can be a recurrent artificial neural network. The method can include identifying decision moments in the recurrent artificial neural network based on determining the complexity of the patterns of activity in the recurrent artificial neural network. Identifying the decision moments can include determining a timing of activity having a complexity that is distinguishable from other activity that is responsive to the input, and identifying the decision moments based on the timing of the activity having the distinguishable complexity. The operations can include inputting a data stream into the recurrent artificial neural network and identifying the clique patterns of activity during the inputting of the data stream. The operations can include evaluating whether the activity is responsive to the input into the artificial neural network. Evaluating whether the activity is responsive to the input can include evaluating that relatively simple patterns of activity occurring relatively soon after the time of the input are responsive to the input, whereas relatively complex patterns of activity occurring relatively soon after the time of the input are not responsive to the input, and evaluating that relatively complex patterns of activity occurring relatively late after the time of the input are responsive to the input, whereas relatively complex patterns of activity occurring relatively early after the time of the input are not responsive to the input.

As another example, a method of identifying decision moments in an artificial neural network includes: determining the complexity of patterns of activity in the recurrent artificial neural network, wherein the activity is responsive to an input into the artificial neural network; determining a timing of activity having a complexity that is distinguishable from other activity that is responsive to the input; and identifying the decision moments based on the timing of the activity having the distinguishable complexity.

As another example, a method of characterizing activity in a recurrent artificial neural network includes identifying predefined clique patterns of activity in the recurrent artificial neural network. The method is performed by data processing apparatus. As another example, the method can include outputting, from a recurrent artificial neural network, a binary sequence of zeros and ones, where each digit in the sequence indicates whether a particular group of nodes in the recurrent artificial neural network displays a corresponding pattern of activity.

As another example, a method of constructing a recurrent artificial neural network can include characterizing the complexity of the patterns of activity that can arise in the recurrent artificial neural network, which includes a structured collection of nodes and links between the nodes, and evolving a structure of the recurrent artificial neural network to increase the complexity of the patterns of activity. Such a construction method can also be applied, for example, as part of a method of training the recurrent artificial neural network.
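
The evolution step can be pictured as a simple hill climb in which the complexity characterization serves as the acceptance test, which is how the preceding paragraphs use it. The sketch below is one such loop under that assumption; `mutate` and `complexity_of` are hypothetical callables standing in for a structure perturbation (adding, deleting, or reweighting links) and for running the network and scoring its patterns of activity.

```python
def evolve_structure(network, mutate, complexity_of, iterations=1000):
    """Iteratively alter a network, keeping alterations that increase
    the complexity of the patterns of activity it displays."""
    best = network
    best_score = complexity_of(best)
    for _ in range(iterations):
        candidate = mutate(best)                 # altered structure
        score = complexity_of(candidate)         # characterize its activity
        if score > best_score:                   # indicator that the altered
            best, best_score = candidate, score  # structure is desirable
    return best
```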

Other embodiments of these implementations include corresponding systems, apparatus, and computer programs, encoded on computer storage devices, configured to perform the steps of the methods.

Particular embodiments described in this disclosure can be implemented to realize one or more of the following advantages. For example, traditional data processing apparatus (e.g., digital and other computers) are programmed to adhere to a predefined logical sequence when processing information. It is relatively easy to identify when such a computer has arrived at a result: completion of the logical sequence embodied in the programming indicates that the information processing is complete and that the computer has "made a decision." The result can be retained in a relatively permanent form (e.g., a memory device, a set of buffers, or the like) at the output of the computer's data processor and accessed for a variety of purposes.

In contrast, as described herein, decision moments in recurrent artificial neural networks can be identified based on characterization of the dynamic properties of the neural network during information processing. Rather than waiting for the artificial neural network to reach a predefined end of a logical sequence, decision moments can be identified based on characteristics of the functional states of the artificial neural network as it processes information.

In addition, characterizations of the dynamic properties of a recurrent artificial neural network during information processing (including characterizations of activity that comports with clique patterns and directed clique patterns) can be used in a variety of signalling operations, including transmission, encoding, encryption, and storage of signals. In particular, during information processing, the characterization of the activity in a recurrent artificial neural network reflects the input and can be treated as an encoded form of the input (i.e., the "output" of the recurrent artificial neural network during the encoding process). For example, these characterizations can be sent to a remote receiver that can decode them to reconstruct the input or a portion of the input.

In addition, in some cases, the activity in different groups of nodes in a recurrent artificial neural network (e.g., activity that comports with clique patterns and directed clique patterns) can be represented as a binary sequence of "0"s and "1"s, where each digit indicates whether the activity comports with a pattern. Since the activity can in some cases be the output of the recurrent artificial neural network, the output of the recurrent artificial neural network can be represented as a vector of binary digits and is compatible with digital data processing.

In addition, in some cases, such characterizations of the dynamic properties of a recurrent artificial neural network can be used before and/or during training to increase the likelihood that complex patterns of activity arise during information processing. For example, the links between nodes in a recurrent neural network can be deliberately evolved before or during training to increase the complexity of the patterns of activity. For example, the links between nodes in a recurrent artificial neural network can be deliberately evolved to increase the likelihood that clique patterns and directed clique patterns arise during information processing. In this way, the time and effort required to train the recurrent artificial neural network can be reduced.

As another example, such characterizations of the dynamic properties of a recurrent artificial neural network can be used to determine the completeness of the training of the recurrent neural network. For example, a recurrent artificial neural network that displays particular types of ordering in its activity (e.g., clique patterns and directed clique patterns) can be considered more deeply trained than a recurrent artificial neural network that does not display that type of ordering. Indeed, in some cases, the extent of training can be quantified by quantifying the degree of ordering of the activity in the recurrent artificial neural network.

For example, a method of identifying decision moments in a neural network includes: determining the complexity of patterns of activity in a recurrent artificial neural network, wherein the activity is responsive to an input into the recurrent artificial neural network; determining a timing of activity having a complexity that is distinguishable from other activity that is responsive to the input; and identifying the decision moments based on the timing of the activity having the distinguishable complexity.

As another example, a method of characterizing activity in a recurrent artificial neural network includes identifying clique patterns of activity in the recurrent artificial neural network. The method is performed by data processing apparatus.

As another example, a method can include outputting, from a recurrent artificial neural network, a binary sequence of zeros and ones, where each digit in the sequence indicates whether a particular group of nodes in the recurrent artificial neural network displays a corresponding pattern of activity.

As another example, a method of constructing a recurrent artificial neural network can include: characterizing the complexity of the patterns of activity that can arise in the recurrent artificial neural network, which includes a structured collection of nodes and links between the nodes; and evolving the structure of the recurrent artificial neural network to increase the complexity of the patterns of activity. Such a construction method can also be used, for example, as part of a method of training the recurrent artificial neural network.

Other embodiments of these implementations include corresponding systems, apparatus, and computer programs, encoded on computer storage devices, configured to perform the steps of the methods.

As another example, in one implementation, a device includes a neural network trained to produce, in response to a first input, an approximation of a first representation of the topological structures in the patterns of activity that arise in a source neural network in response to that first input. The neural network is also trained to produce, in response to a second input, an approximation of a second representation of the topological structures in the patterns of activity that arise in the source neural network in response to the second input. The neural network is further trained to produce, in response to a third input, an approximation of a third representation of the topological structures in the patterns of activity that arise in the source neural network in response to the third input.

This and other implementations can include one or more of the following features. The topological structures can all include two or more nodes of the source neural network and one or more edges between the nodes. The topological structures can include simplices. The topological structures can enclose cavities. Each of the first, second, and third representations can represent topological structures that arise in the source neural network only when the patterns of activity have a complexity that is distinguishable from other activity that is responsive to the respective input. The device can also include a processor coupled to receive the approximations of the representations produced by the neural network device and to process them. The processor can include a second neural network that has been trained to process the representations produced by the first neural network. Each of the first, second, and third representations can include multi-valued, non-binary digits. Each of the first, second, and third representations can represent the occurrence of the topological structures without specifying where in the source neural network the patterns of activity arise. The device can include a smartphone. The source neural network can be a recurrent neural network.

In another implementation, a device includes a neural network coupled to receive representations of the topological structures in the patterns of activity that arise in a source neural network in response to a plurality of different inputs. The neural network is trained to process the representations and produce corresponding outputs.

This and other implementations can include one or more of the following features. The topological structures can all include two or more nodes of the source neural network and one or more edges between the nodes. The topological structures can include simplices. The representations of the topological structures can represent topological structures that arise in the source neural network only when the patterns of activity have a complexity that is distinguishable from other activity that is responsive to the respective input. The device can also include a neural network trained to produce, in response to a plurality of different inputs, approximations of respective representations of the topological structures in the patterns of activity that arise in the source neural network in response to those different inputs. The representations can include multi-valued, non-binary digits. The representations can represent the occurrence of the topological structures without specifying where in the source neural network the patterns of activity arise. The source neural network can be a recurrent neural network.

In another implementation, a method is implemented by a neural network device and includes: receiving a representation of the topological structures in the patterns of activity in a source neural network, wherein the activity is responsive to an input into the source neural network; processing the representation; and outputting a result of processing the representation. The processing comports with training of the neural network to process different representations of the topological structures in the patterns of activity in the source neural network.

This and other implementations can include one or more of the following features. The topological structures can all include two or more nodes of the source neural network and one or more edges between the nodes. The topological structures can include simplices. The topological structures can enclose cavities. The representations of the topological structures can represent topological structures that arise in the source neural network only when the patterns of activity have a complexity that is distinguishable from other activity that is responsive to the respective input. The representations can include multi-valued, non-binary digits. The representations can represent the occurrence of the topological structures without specifying where in the source neural network the patterns of activity arise. The source neural network can be a recurrent neural network.

As another example, in one implementation, a device includes a neural network coupled to receive representations of the topological structures in the patterns of activity that arise in a source neural network in response to a plurality of different inputs. The neural network is trained to process the representations and produce corresponding outputs.

This and other implementations can include one or more of the following features. The topological structures can all include two or more nodes of the source neural network and one or more edges between the nodes. The device can include an actuator coupled to receive the responsive outputs from the neural network and to act upon a real or virtual environment; a sensor coupled to measure characteristics of the environment; and a teacher module configured to interpret the measurements received from the sensor and to provide reward and/or regret to the neural network. The topological structures can include simplices. The topological structures can enclose cavities. The representations of the topological structures can represent topological structures that arise in the source neural network only when the patterns of activity have a complexity that is distinguishable from other activity that is responsive to the respective input. The device can include a second neural network trained to produce, in response to a plurality of different inputs, approximations of respective representations of the topological structures in the patterns of activity that arise in the source neural network in response to those different inputs. The device can also include an actuator coupled to receive the responsive outputs from the neural network and to act upon a real or virtual environment, and a sensor coupled to measure characteristics of the environment. The second neural network can be trained to produce the corresponding approximations at least in part in response to the measured characteristics of the environment. The device can also include a teacher module configured to interpret the measurements received from the sensor and to provide reward and/or regret to the neural network. The representations of the topological structures can include multi-valued, non-binary digits. The representations can represent the occurrence of the topological structures without specifying where in the source neural network the patterns of activity arise. The device can be a smartphone. The source neural network can be a recurrent neural network.

In another implementation, a method implemented by one or more data processing devices can include: receiving a training set that includes a plurality of representations of the topological structures in the patterns of activity in a source neural network; and training a neural network using the representations either as an input into the neural network or as a target answer vector. The activity is responsive to an input into the source neural network.

This and other implementations can include one or more of the following features. The topological structures can all include two or more nodes of the source neural network and one or more edges between the nodes. The training set can include a plurality of input vectors, each corresponding to a respective one of the representations. Training the neural network can include using each of the representations as a target answer vector. Training the neural network can include using each of the representations as an input. The training set can include reward or regret values. Training the neural network can include reinforcement learning. The topological structures can include simplices. The representations of the topological structures can represent topological structures that arise in the source neural network only when the patterns of activity have a complexity that is distinguishable from other activity that is responsive to the respective input. The representations can include multi-valued, non-binary digits. The representations can represent the occurrence of the topological structures without specifying where in the source neural network the patterns of activity arise. The source neural network can be a recurrent neural network.
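
As a worked example of the target-answer-vector option, the sketch below fits a linear least-squares map from raw inputs to the source network's representations. The patent trains a neural network here, so a linear model is a deliberate simplification, and the array shapes and learning rate are assumptions.

```python
import numpy as np

def train_source_approximator(inputs, representations, lr=0.1, epochs=500):
    """Fit a linear map X -> Y, where Y holds the (possibly multi-valued)
    representations of topological structures used as target answer
    vectors. Gradient descent on the mean squared error; assumes the
    inputs are roughly unit-scaled so the step size is stable."""
    X = np.asarray(inputs, dtype=float)           # shape (n, d_in)
    Y = np.asarray(representations, dtype=float)  # shape (n, d_rep)
    W = np.zeros((X.shape[1], Y.shape[1]))
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ W - Y) / len(X)     # d/dW of mean ||XW - Y||^2
        W -= lr * grad
    return W  # approximate a new input's representation as x @ W
```

With the representations used as inputs instead, X and Y simply swap roles: the representation becomes the feature vector and some downstream label becomes the target.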

The details of one or more implementations described in this disclosure are set forth in the accompanying drawings and the description below. Other features, implementations, and advantages will be apparent from the description, the drawings, and the claims.

The reference numerals are as follows:

100: recurrent artificial neural network device
101, 102, 103, 104, 105, 106, 107: nodes
110: links
400: process
405, 410, 415, 420, 425, 430: steps
500: patterns
505, 510, 515, 520, 525, 530: patterns
600: patterns
605, 610: patterns
700: patterns
705, 710: patterns
800: data table
805, 810: rows
905: graph
906, 907, 908, 909: vertical lines
910: graph
915, 920, 925: dashed rectangles
930: peak
935: peak
940: baseline
1000: process
1005, 1010, 1015, 1020: steps
1100: process
1105, 1110: steps
1200: representation
1200’: approximation
1205, 1207, 1211, 1293, 1294, 1297: bits
1500: subgraph
1505, 1510, 1515, 1520: nodes
1525, 1530, 1535, 1540, 1545, 1550: edges
1600: subgraph
1605, 1610, 1615, 1620: nodes
1625, 1630, 1635, 1640, 1645: edges
1700: classification system
1705: source neural network device
1710: linear classifier
1715: input layer
1720: input
1725: output
1800: classification system
1810: neural network classifier
1820: input layer
1825: output layer
1900: classification system
1905: source approximator
1915: input layer
1920: output layer
2000: classification system
2100: edge device
2110: optical imaging system
2115: image processing electronics
2120: source approximator
2125: representation classifier
2130: communications controller and interface
2135: data port
2200: edge device
2215: image processing electronics
2225: representation classifier
2230: communications controller and interface
2235: data port
2240: sensor
2245: multi-input source approximator
2300: system
2305: local neural network device
2310: telephone base station
2315: wireless access point
2320: server system
2325: data communications network
2400: system
2410: linear processor
2420: input
2425: output
2500: system
2510: neural network
2520: input layer
2525: output layer
2600: system
2700: system
2800: reinforcement learning system
2805: deep neural network
2810: actuator
2815: sensor
2820: teacher module
2825: data source
2830: environment

Figure 1 is a schematic illustration of the structure of a recurrent artificial neural network device.

Figures 2 and 3 are schematic illustrations of the functioning of the recurrent artificial neural network device in different windows of time.

Figure 4 is a flowchart of a process for identifying decision moments in a recurrent artificial neural network based on characterization of the activity in the network.

Figure 5 is a schematic illustration of a pattern of activity that can be identified and used to identify decision moments in a recurrent artificial neural network.

Figure 6 is a schematic illustration of a pattern of activity that can be identified and used to identify decision moments in a recurrent artificial neural network.

Figure 7 is a schematic illustration of a pattern of activity that can be identified and used to identify decision moments in a recurrent artificial neural network.

Figure 8 is a schematic illustration of a data table that can be used to determine the complexity, or degree of ordering, of the patterns of activity in a recurrent artificial neural network device.

Figure 9 is a schematic illustration of determining the timing of activity patterns having a distinguishable complexity.

Figure 10 is a flowchart of a process for encoding signals using a recurrent artificial neural network based on characterization of the activity in the network.

Figure 11 is a flowchart of a process for decoding signals using a recurrent artificial neural network based on characterization of the activity in the network.

Figures 12, 13, and 14 are schematic illustrations of binary forms or representations of topological structures.

Figures 15 and 16 are schematic illustrations of how the presence or absence of features corresponding to different bits need not be independent of one another.

Figures 17, 18, 19, and 20 are schematic illustrations of the use of representations of the occurrence of topological structures in the activity of a neural network in four different classification systems.

Figures 21 and 22 are schematic illustrations of edge devices that include local artificial neural networks that can be trained using representations of the occurrence of topological structures corresponding to activity in a source neural network.

Figure 23 is a schematic illustration of a system in which local neural networks can be trained using representations of the occurrence of topological structures corresponding to activity in a source neural network.

Figures 24, 25, 26, and 27 are schematic illustrations of the use of representations of the occurrence of topological structures in the activity of a neural network in four different systems.

Figure 28 is a schematic illustration of a system that includes an artificial neural network that can be trained using representations of the occurrence of topological structures corresponding to activity in a source neural network.

Like reference symbols in the various drawings indicate like elements.

Figure 1 is a schematic illustration of the structure of a recurrent artificial neural network device 100. The recurrent artificial neural network device 100 is a device that uses a system of interconnected nodes to mimic the information-encoding and other processing capabilities of networks of biological neurons. The recurrent artificial neural network device 100 can be implemented in hardware, in software, or in combinations thereof.

The illustrated recurrent artificial neural network device 100 includes a plurality of nodes 101, 102, ..., 107 interconnected by a plurality of structural links 110. The nodes 101, 102, ..., 107 are discrete information-processing constructs analogous to the neurons in a biological network. The nodes 101, 102, ..., 107 generally process one or more input signals received over one or more of the links 110 to produce one or more output signals that are output over one or more of the links 110. For example, in some implementations, the nodes 101, 102, ..., 107 can be artificial neurons that weight and sum multiple input signals, pass the sum through one or more nonlinear activation functions, and output one or more output signals.

The nodes 101, 102, ..., 107 can operate as accumulators. For example, the nodes 101, 102, ..., 107 can operate according to an integrate-and-fire model in which one or more signals accumulate in a first node until a threshold is reached. After the threshold is reached, the first node fires by sending an output signal along one or more of the links 110 to a connected second node. That second node 101, 102, ..., 107 in turn accumulates the received signals and, if the accumulated signal reaches a threshold, sends a further output signal to another connected node.
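
A minimal accumulator of the kind this paragraph describes can be sketched as follows; the threshold value and the reset-to-zero rule are illustrative assumptions, since integrate-and-fire variants differ on both.

```python
class IntegrateAndFireNode:
    """Accumulates incoming signals until a threshold is reached,
    then fires and resets."""

    def __init__(self, threshold=1.0):
        self.threshold = threshold
        self.potential = 0.0

    def receive(self, signal):
        """Add an incoming signal; return True when the node fires."""
        self.potential += signal
        if self.potential >= self.threshold:
            self.potential = 0.0  # reset after firing
            return True           # an output signal is sent on the node's links
        return False
```

For example, with the default threshold, `node.receive(0.6)` returns False and a subsequent `node.receive(0.5)` returns True, i.e., the node fires on the second signal.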

The structural links 110 are connections that are capable of transmitting signals between the nodes 101, 102, ..., 107. For convenience, all of the structural links 110 are treated herein as identical bidirectional links that convey a signal from a first of the nodes 101, 102, ..., 107 to a second of the nodes in the same way that they convey a signal from the second node to the first. However, not all of the structural links 110 need be bidirectional. For example, some or all of the structural links 110 can be unidirectional links that convey a signal from a first of the nodes 101, 102, ..., 107 to a second of the nodes without conveying signals from the second node to the first.

As another example, in some implementations, the structural links 110 can have properties other than directionality. For example, in some implementations, different structural links 110 can carry signals of different magnitudes, resulting in different strengths of connection between respective ones of the nodes 101, 102, ..., 107. As another example, different structural links 110 can carry different types of signals (e.g., inhibitory and/or excitatory signals). Indeed, in some implementations, the structural links 110 can be modeled on the links between soma in biological systems and can reflect at least a portion of their morphological, chemical, and other diversity.

In the illustrated implementation, the recurrent artificial neural network device 100 is a clique network (or sub-network) in which each of the nodes 101, 102, ..., 107 is connected to every other node 101, 102, ..., 107. This is not necessarily the case; in some implementations, each node 101, 102, ..., 107 can be connected to a proper subset of the nodes 101, 102, ..., 107 (by identical or different links, as the case may be).

For clarity of illustration, the recurrent artificial neural network device 100 is shown with only seven nodes. In general, real-world neural network devices will include many more nodes. For example, in some implementations, neural network devices can include hundreds of thousands, millions, or even billions of nodes. The recurrent artificial neural network device 100 can thus be a fraction (i.e., a sub-network) of a larger recurrent artificial neural network.

In biological neural network devices, accumulation and signal transmission require the passage of real-world time. For example, the soma of a neuron integrates inputs received over time, and signal transmission from neuron to neuron takes time determined by, e.g., the signal transmission speed and the nature and length of the connections between neurons. The state of a biological neural network device is therefore dynamic and changes over time.

In artificial recurrent neural network devices, time is artificial and is represented using mathematical constructs. For example, the time taken for a signal to pass from node to node can be expressed in artificial units that generally bear no relation to the real-world passage of time, such as computer clock cycles or other units. The state of an artificial recurrent neural network device nevertheless changes with respect to these artificial units, and its state can therefore be described as "dynamic."

It should be noted that, for ease of explanation, these artificial units are referred to in this disclosure as units of "time." It should be understood, however, that these units are artificial and generally do not correspond to the real-world passage of time.

Figures 2 and 3 are schematic illustrations of the functioning of recurrent artificial neural network device 100 within different time windows. Because the state of recurrent artificial neural network device 100 is dynamic, its functioning can be represented by the signal-transmission activity that occurs within a time window. Such functional illustrations typically show activity in only a fraction of the links 110. Specifically, because not every link 110 ordinarily conveys a signal within a given time window, not every link 110 is shown in these illustrations as contributing to the functioning of recurrent artificial neural network device 100.

In Figures 2 and 3, active links 110 are drawn as relatively thick solid lines connecting pairs of nodes 101, 102, ..., 107, whereas inactive links 110 are drawn as dashed lines. This convention is purely illustrative; the structural connections formed by links 110 exist whether or not the links are active. The convention does, however, highlight the activity and functioning of recurrent artificial neural network device 100.

In addition to schematically showing the presence of activity along a link, the direction of the activity is also shown schematically. Specifically, the relatively thick solid lines that mark links 110 as active also bear arrowheads indicating the direction of signal transmission along the link during the relevant time window. In general, the direction of signal transmission within a single time window does not constrain a link to being a unidirectional link of the same directionality. Rather, in a first functional graph for a first time window a link may be active in a first direction, while in a second functional graph for a second time window the same link may be active in the opposite direction. In some cases, however, such as in recurrent artificial neural network devices 100 that deliberately include unidirectional links, the directionality of signal transmission will ultimately reflect the directionality of the link.

In a feedforward neural network device, information moves in only a single direction (i.e., forward) to an output layer of nodes at the end of the network. The propagation of a signal through the network to the output layer can be taken to indicate that a "decision" has been made and that information processing is complete.

In a recurrent neural network, by contrast, the connections between nodes form cycles, and the activity of the network proceeds dynamically without a readily identifiable decision. For example, even in a recurrent neural network with only three nodes, a first node can send a signal to a second node, which can respond by sending a signal to a third node. The third node can in turn respond by sending a signal back to the first node. The signals received by the first node can thus be at least partly responsive to signals sent from that very same node.

The schematic functional graphs of Figures 2 and 3 illustrate this with a network only slightly larger than a three-node recurrent neural network. The functional graph of Figure 2 can be taken to illustrate activity within a first time window, and Figure 3 activity within a second time window immediately following the first. As shown, a collection of signal-transmission activity originates at node 104 and proceeds in a generally clockwise direction through recurrent artificial neural network device 100 during the first time window. During the second time window, at least some of the signal-transmission activity returns to node 104. Even in this simple illustration, signal transmission does not proceed in a manner that yields a clearly identifiable output or endpoint.

When one considers a recurrent neural network with, say, thousands of nodes or more, it becomes apparent that signal propagation can occur along an enormous number of paths and that the signals lack a clearly identifiable "output" location or time. Although a network can be designed to return to a quiescent state with only background activity, or even no signal-transmission activity at all, the quiescent state itself does not represent the result of information processing: a recurrent neural network returns to the quiescent state regardless of the input. The "output" or result of information processing in response to a particular input is instead encoded in the activity that occurs within the recurrent neural network in response to that input.

Figure 4 is a flowchart of a process 400 for identifying decision moments in a recurrent artificial neural network based on a characterization of the activity in the network. A decision moment is a point in time at which the activity in the recurrent artificial neural network indicates the result of the network's information processing in response to an input. Process 400 can be performed by a system of one or more data processing devices that perform operations in accordance with the logic of one or more sets of machine-readable instructions. For example, process 400 can be performed by the same system of one or more computers that executes the software implementing the recurrent artificial neural network used in process 400.

The system performing process 400 receives a notification that a signal has been input into the recurrent artificial neural network (at 405). In some cases, the input of the signal is a discrete injection event in which, for example, information is injected into one or more nodes and/or one or more links of the neural network. In other cases, the input of the signal is a stream of information injected into one or more nodes and/or links of the neural network over a period of time. The notification indicates that the artificial neural network is actively processing information rather than, e.g., resting in a quiescent state. In some cases (e.g., when the neural network leaves an identifiable quiescent state), the notification can be received from the neural network itself.

The system performing process 400 divides the responsive activity in the network into a collection of time windows (at 410). When the input is a discrete injection event, the time windows may subdivide the time from the injection until the return to the quiescent state into periods during which the activity displays variable complexity. When the input is a stream of information, the duration of the injection (and, optionally, the time needed to return to the quiescent state after the injection is complete) may be subdivided into time windows during which the activity displays variable complexity. Various approaches to determining the complexity of activity are discussed further below.

In some implementations all of the time windows have the same duration, although this is not required; in some implementations the time windows may have different durations. For example, in some implementations the duration of the time windows may increase with the time elapsed since a discrete injection event.

In some implementations the time windows may form a series of consecutive, disjoint windows. In other implementations the time windows overlap in time, such that one window begins before the preceding window ends. In some cases the time windows may be windows that slide in time.
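A minimal sketch of such windowing, assuming windows of fixed width placed at a fixed stride (overlapping whenever the stride is smaller than the width); the parameter names and values are illustrative only.

def time_windows(t_start, t_end, width, stride):
    """Yield (start, end) window bounds; windows overlap when stride < width."""
    t = t_start
    while t + width <= t_end:
        yield (t, t + width)
        t += stride

# Overlapping example:
# list(time_windows(0, 10, 4, 2)) -> [(0, 4), (2, 6), (4, 8), (6, 10)]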

In some implementations, time windows of different durations are defined for different determinations of activity complexity. For example, a pattern defining activity that occurs among a relatively large number of nodes may be given a time window of longer duration than a pattern defining activity that occurs among a relatively small number of nodes. For instance, among the patterns 500 (shown in Figure 5), the time window defined for identifying activity conforming to pattern 530 may be longer than the time window defined for identifying activity conforming to pattern 505.

The system performing process 400 identifies patterns in the activity of the network within the different time windows (at 415). As discussed further below, patterns of activity can be identified by treating a functional graph as a topological space, with the nodes treated as points. In some implementations, the identified patterns of activity are cliques in a functional graph of the network, e.g., directed cliques.

The system performing process 400 determines the complexity of the patterns of activity in the different time windows (at 420). Complexity can be a measure of the probability that an ordered pattern of activity arises within a time window. Patterns of activity that arise readily at random are thus relatively simple, whereas patterns of activity that appear in a non-random order are relatively complex. For example, in some implementations the complexity of the patterns of activity can be measured using, e.g., simplex counts or Betti numbers of the activity patterns.

The system performing process 400 determines the timing of activity patterns that have distinguishable complexity (at 425). Particular activity patterns can be distinguished on the basis of upward or downward deviations in complexity (e.g., relative to a fixed or variable baseline). In other words, the system can identify the points in time at which the activity patterns display a particularly high or particularly low degree of non-random order.

For example, when the signal input is a discrete injection event, deviations from a stable baseline, or from a curve characterizing the neural network's average response to a variety of discrete injection events, can be used to determine the timing of distinguishably complex activity patterns. As another example, when the signal input is a stream of information, large changes in complexity during streaming can be used to determine the timing of distinguishably complex activity patterns.

The system performing process 400 times the reading of the output of the neural network based on the timing of the distinguishably complex activity patterns (at 430). For example, in some implementations the output of the neural network can be read at the same time that a distinguishably complex activity pattern arises. In some implementations, when a deviation in complexity indicates a relatively high degree of non-random order in the activity, the observed activity patterns themselves can serve as the output of the recurrent artificial neural network.

Figure 5 is a schematic illustration of patterns 500 of activity that can be identified and used to identify decision moments in a recurrent artificial neural network. For example, patterns 500 can be identified at step 415 of process 400 shown in Figure 4.

Patterns 500 are schematic illustrations of activity within a recurrent artificial neural network. When patterns 500 are applied, the functional graph is treated as a topological space and its nodes as points. Activity in nodes and links conforming to a pattern 500 can be recognized as ordered regardless of the identities of the particular nodes and/or links that participate in the activity. For example, first pattern 505 can represent the activity among nodes 101, 104, and 105 of Figure 2, with point 0 taken as node 104, point 1 as node 105, and point 2 as node 101. As another example, first pattern 505 can also represent the activity among nodes 104, 105, and 106 of Figure 3, with point 0 taken as node 106, point 1 as node 104, and point 2 as node 105. The order of the activity in a directed clique is also specified. For example, in pattern 505, the activity between points 1 and 2 occurs after the activity between points 0 and 1.

In the illustrated implementation, patterns 500 are all directed cliques or directed simplices. In such patterns, the activity originates at a source node that sends signals to every other node in the pattern. In patterns 500, the source node is designated point 0, and the other nodes are designated points 1, 2, .... Furthermore, in a directed clique or simplex, one node acts as a sink and receives signals sent from every other node in the pattern. In patterns 500, the sink node is designated as the highest-numbered point in the pattern. For example, in pattern 505 the sink node is point 2; in pattern 510 the sink node is point 3; in pattern 515 the sink node is point 4; and so on. The activity represented by patterns 500 is thus ordered in a distinguishable manner.
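The source/sink ordering just described can be tested mechanically. The sketch below, offered only as an illustration, assumes the activity in a time window has been reduced to a mapping adj from each node to the set of nodes it signalled; a set of nodes then conforms to a directed-clique pattern exactly when some ordering of the nodes has every earlier node signalling every later one.

from itertools import combinations

def is_directed_clique(nodes, adj):
    """True if the nodes admit an ordering with every earlier node signalling every later one."""
    # In a directed clique the source has the most outgoing signals within the set
    # and the sink has the fewest, so sorting by out-degree recovers the candidate order.
    order = sorted(nodes,
                   key=lambda n: sum(1 for m in nodes if m != n and m in adj[n]),
                   reverse=True)
    return all(order[j] in adj[order[i]]
               for i, j in combinations(range(len(order)), 2))

For example, with adj = {0: {1, 2}, 1: {2}, 2: set()}, the set {0, 1, 2} conforms (point 0 is the source and point 2 the sink), whereas a cyclic triangle does not.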

Each of patterns 500 has a different number of points and reflects ordered activity among a different number of nodes. For example, pattern 505 is a two-dimensional (2D) simplex and reflects activity among three nodes, pattern 510 is a three-dimensional (3D) simplex and reflects activity among four nodes, and so on. As the number of points in a pattern increases, so do the degree of ordering and the complexity of the activity. For example, for a large collection of nodes with a certain amount of random activity within a time window, some of that activity may happen to conform to pattern 505. However, the probability that random activity conforms to patterns 510, 515, 520, ... respectively decreases progressively. The presence of activity conforming to pattern 530 therefore indicates a higher degree of ordering and complexity than the presence of activity conforming to pattern 505.

As noted above, in some implementations time windows of different durations can be defined for different determinations of activity complexity. For example, when activity conforming to pattern 530 is to be identified, a time window of longer duration can be used than when activity conforming to pattern 505 is identified.

Figure 6 is a schematic illustration of patterns 600 of activity that can be identified and used to identify decision moments in a recurrent artificial neural network. For example, patterns 600 can be identified at step 415 of process 400 of Figure 4.

Like patterns 500, patterns 600 are schematic illustrations of activity within a recurrent artificial neural network. However, patterns 600 depart from the strict ordering of patterns 500 in that they are not fully directed cliques or directed simplices. In particular, patterns 605 and 610 have lower directionality than pattern 515, and pattern 605 lacks a sink node altogether. Nevertheless, patterns 605 and 610 indicate degrees of ordered activity beyond what would be expected by random chance, and they can be used to determine the complexity of activity in a recurrent artificial neural network.

Figure 7 is a schematic illustration of patterns 700 of activity that can be identified and used to identify decision moments in a recurrent artificial neural network. For example, patterns 700 can be identified at step 415 of process 400 of Figure 4.

Patterns 700 are collections of directed cliques or directed simplices of the same dimension (i.e., having the same number of points) that define patterns involving more points than the individual cliques or simplices and that enclose cavities within the collection of directed simplices.

For example, pattern 705 includes six different three-point, two-dimensional patterns 505 that together define a homology class of degree two, while pattern 710 includes eight different three-point, two-dimensional patterns 505 that together define a second homology class of degree two. Each of the three-point, two-dimensional patterns 505 in patterns 705 and 710 can be thought of as enclosing a respective cavity. Such homology classes within a topological representation can be counted by means of the nth Betti number associated with the directed graph.

Activity represented by patterns such as patterns 700 indicates a relatively high degree of ordering of the activity within the network, one that is unlikely to arise by random chance. Patterns 700 can be used to characterize the complexity of that activity.

In some implementations, only some patterns of activity are identified when identifying decision moments, and/or some portions of the identified patterns of activity are discarded or otherwise ignored. For example, as shown in Figure 5, activity conforming to the five-point, four-dimensional simplex pattern 515 inherently includes activity conforming to the four-point, three-dimensional pattern 510 and to the three-point, two-dimensional simplex pattern 505. For instance, points 0, 2, 3, 4 and points 1, 2, 3, 4 of the four-dimensional simplex pattern 515 of Figure 5 both conform to the three-dimensional simplex pattern 510. In some implementations, patterns that include fewer points, and are therefore of lower dimension, can be discarded or otherwise ignored when identifying decision moments.

As another example, only certain patterns of activity may need to be identified. For example, in some implementations only patterns with an odd number of points (e.g., three, five, seven, ...) or an even number of dimensions (two, four, six, ...) are used in identifying decision moments.

The complexity, or degree of ordering, of the patterns of activity in a recurrent artificial neural network device in different time windows can be determined in a variety of ways. Figure 8 is a schematic illustration of a data table 800 that can be used in such a determination. Data table 800 can be used to determine the complexity of the activity patterns on its own or in conjunction with other activities. For example, data table 800 can be used at step 420 of process 400 of Figure 4.

More specifically, data table 800 includes counts of the number of pattern occurrences during a time window "N," with the rows presenting counts of activity matching patterns of different dimensions. For example, in the illustrated example, row 805 includes the count (i.e., "2032") of occurrences of activity matching one or more three-point, two-dimensional patterns, while row 810 includes the count (i.e., "877") of occurrences of activity matching one or more four-point, three-dimensional patterns. Because an occurrence of a pattern indicates that the activity has a non-random order, the occurrence counts also provide a general characterization of the overall complexity of the activity patterns. A table akin to data table 800 can be constructed for each time window defined, e.g., at step 410 of process 400 of Figure 4.

Although data table 800 includes a separate row and column for each type of activity pattern, this is not limiting. For example, one or more of the counts (e.g., counts for simpler patterns) can be omitted from data table 800 and from the determination of complexity. As another example, in some implementations a single row or column can include counts of occurrences of multiple activity patterns.

Although Figure 8 illustrates the counts in a data table 800, this is not limiting. For example, the counts can instead be expressed as a vector (e.g., <2032, 877, 133, 66, 48, ...>). However the counts are presented, in some implementations they can be expressed in binary form, compatible with digital data processing infrastructure.
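As a sketch of this bookkeeping (with the numbers simply echoing those shown in the figure), the per-dimension counts can be held in a mapping keyed by dimension and flattened into the vector form just mentioned:

# Occurrence counts for one time window; keys are pattern dimensions
# (2 = three-point patterns, 3 = four-point patterns, and so on).
counts = {2: 2032, 3: 877, 4: 133, 5: 66, 6: 48}

# Vector form, ordered by dimension.
count_vector = [counts[d] for d in sorted(counts)]
# count_vector == [2032, 877, 133, 66, 48]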

In some implementations, the occurrences of the patterns can be weighted or combined to determine the degree of ordering or the complexity, e.g., at step 420 of process 400 of Figure 4. For example, the Euler characteristic can provide an approximation of the complexity of the activity:

S_0 - S_1 + S_2 - S_3 + ...   (Equation 1)

where S_n is the number of occurrences of patterns with n points (i.e., patterns of dimension n - 1). The patterns can be, e.g., the directed-clique patterns 500 of Figure 5.
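Equation 1 is simply an alternating sum over the occurrence counts and can be transcribed directly; the example values below are illustrative only.

def euler_characteristic(counts):
    """Alternating sum S0 - S1 + S2 - ..., where counts[n] holds the count Sn."""
    return sum((-1) ** n * s for n, s in enumerate(counts))

# Illustrative values:
euler_characteristic([3, 6, 4, 1])  # 3 - 6 + 4 - 1 = 0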

As another example of how pattern occurrences can be weighted to determine the degree of ordering or complexity, in some implementations the occurrences of patterns can be weighted based on the weights of the active links. In particular, as discussed above, the strengths of the connections between nodes in an artificial neural network can vary, e.g., with a strength reflecting how active a link was during training. An occurrence of activity along a collection of relatively strong links can be weighted differently from an occurrence of the same pattern of activity along a collection of relatively weak links. For example, in some implementations an occurrence can be weighted using the sum of the weights of the active links.
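A sketch of that weighting, assuming each occurrence has been reduced to the list of links it traversed; link_weights is a hypothetical mapping from links to trained connection strengths.

def occurrence_weight(links, link_weights):
    """Weight one pattern occurrence by the summed weights of its active links."""
    return sum(link_weights[link] for link in links)

link_weights = {("a", "b"): 0.9, ("b", "c"): 0.8, ("d", "e"): 0.1, ("e", "f"): 0.2}
occurrence_weight([("a", "b"), ("b", "c")], link_weights)  # 1.7 (strong links)
occurrence_weight([("d", "e"), ("e", "f")], link_weights)  # 0.3 (weak links)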

In some implementations, the Euler characteristic or another measure of complexity can be normalized by the total number of patterns matched within a given time window and/or by the total number of patterns that the particular network could possibly form given its structure. Equations 2 and 3 below provide examples of normalization by the total number of patterns the network can form.

In some implementations, occurrences of higher-dimensional patterns involving larger numbers of nodes can be weighted more heavily than occurrences of lower-dimensional patterns involving smaller numbers of nodes. For example, the probability of forming a directed clique decreases rapidly with increasing dimension: to form an n-dimensional clique (n-clique) on n + 1 nodes, all (n + 1)n/2 edges must be correctly oriented. Such probabilities can be reflected in the weights.
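The edge count, and the correspondingly small chance that randomly oriented edges all line up with one fixed ordering (the uniform random-orientation assumption here is ours, for illustration), can be tabulated directly:

for n in range(2, 7):
    edges = (n + 1) * n // 2      # edges in an n-clique on n + 1 nodes
    p = 1.0 / 2 ** edges          # chance all edges match one fixed ordering,
    print(n, edges, p)            # if each direction is independently equally likely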

In some implementations, both the dimension and the directionality of a pattern can be used to weight its occurrences and to determine the complexity of the activity. For example, referring to Figure 6, given the differences in directionality among patterns 515, 605, and 610, an occurrence of the five-point, four-dimensional pattern 515 can be weighted more heavily than an occurrence of the five-point, four-dimensional pattern 605 or 610.

An example of determining the degree of ordering or the complexity of activity by means of the directionality and dimension of the patterns is given by the following equation:

[Equation 2, reproduced in the original filing as image 108119813-A0305-02-0031-3]

where S_x^active denotes the number of active occurrences of patterns with n points, and ERN denotes the corresponding computation for an equivalent random network (i.e., a network with the same number of nodes connected at random). Furthermore, SC can be obtained from:

[Equation 3, reproduced in the original filing as image 108119813-A0305-02-0031-4]

where S_x^silent denotes the number of occurrences of patterns with n points when the recurrent artificial neural network is quiescent, and can be regarded as embodying the total number of patterns the network could possibly form. In Equations 2 and 3, the patterns can be, e.g., the directed-clique patterns 500 of Figure 5.

Figure 9 is a schematic illustration of the determination of the timing of activity patterns that have distinguishable complexity. The determination illustrated in Figure 9 can be performed alone or in conjunction with other activities. For example, the determination of the timing of activity patterns with distinguishable complexity can be performed at step 425 of process 400 of Figure 4.

Figure 9 includes a graph 905 and a graph 910. Graph 905 illustrates occurrences of patterns as a function of time along the x-axis. In particular, individual occurrences are illustrated schematically as vertical lines 906, 907, 908, 909. Each row of occurrences can be instances of activity matching a respective pattern or class of patterns. For example, the top row of occurrences can be instances of activity matching pattern 505 of Figure 5, the second row can be instances of activity matching pattern 510 of Figure 5, the third row can be instances of activity matching pattern 515 of Figure 5, and so on.

Graph 905 also includes dashed rectangles 915, 920, 925 that schematically delineate different time windows in which the activity patterns have distinguishable complexity. As shown, during the time windows delineated by dashed rectangles 915, 920, 925, the activity in the recurrent artificial neural network matches patterns indicative of complexity with a higher probability than outside those time windows.

Graph 910 illustrates the complexity associated with these occurrences as a function of time along the x-axis. Graph 910 includes a first peak 930 in complexity that coincides with the time window delineated by dashed rectangle 915, and a second peak 935 in complexity that coincides with the time windows delineated by dashed rectangles 920, 925. As shown, the complexity of peaks 930, 935 is distinguishable from a baseline level of complexity 940.
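Picking out the windows whose complexity stands apart from the baseline, as peaks 930 and 935 do from baseline 940, can be as simple as thresholding against the median complexity; the factor of two below is an arbitrary illustrative choice, not a value taken from this disclosure.

from statistics import median

def distinguishable_windows(complexity, factor=2.0):
    """Indices of time windows whose complexity exceeds factor x the median baseline."""
    baseline = median(complexity)
    return [i for i, c in enumerate(complexity) if c > factor * baseline]

distinguishable_windows([1, 1, 5, 1, 1, 6, 1])  # [2, 5]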

In some implementations, the times at which the output of the recurrent artificial neural network is read coincide with the occurrences of activity patterns having distinguishable complexity. For example, in the illustrative context of Figure 9, the output of the recurrent artificial neural network can be read at peaks 930, 935 (i.e., during the time windows delineated by dashed rectangles 915, 920, 925).

Identifying distinguishable complexity in a recurrent artificial neural network is particularly beneficial when the input to the network is a stream of data, such as video or audio data. Although a data stream has a beginning, it is generally desirable to process information in the stream that bears no predefined relationship to that beginning. For example, a neural network may perform object recognition, such as recognizing a bicyclist in the vicinity of an automobile. The network should be able to recognize the bicyclist whenever the bicyclist appears in the video stream, i.e., regardless of the time elapsed since the start of the video. Continuing this example, while the data stream is fed into the object-recognition neural network, any activity patterns in the network will generally display low or quiescent complexity, and they will do so for as long as the streaming data is fed continuously (or nearly continuously) into the network. However, when an object of interest appears in the video stream, the complexity of the activity becomes distinguishable and indicates the time at which the object was recognized in the stream. The timing of distinguishable complexity in the activity can thus also serve as a yes/no output indicating whether the data in the data stream satisfies particular criteria.

In some implementations, activity patterns with distinguishable complexity provide not only the timing but also the content of the output of the artificial neural network. In particular, the identities and the activity of the nodes that participate in activity conforming to the activity patterns can be treated as the output of the recurrent artificial neural network. The identified activity patterns can thus represent the result of processing by the neural network, while the timing marks when that decision is to be read.

The content of the decision can be expressed in a variety of forms. For example, in some implementations, and as discussed in further detail below, the content of the decision can be expressed as a binary vector or matrix of ones and zeros. Each digit can indicate, e.g., whether or not a pattern of activity is present for a predefined group of nodes and/or a predefined duration. In such implementations, the content of the decision is expressed in binary form and is compatible with traditional digital data processing infrastructure.
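A sketch of such a binary decision vector, with entirely hypothetical pattern names and node groups: each entry records whether activity matching a given pattern occurred on a given predefined group of nodes.

# Each query asks: did activity matching this pattern occur on this node group?
queries = [("2D-simplex", frozenset({101, 104, 105})),
           ("3D-simplex", frozenset({101, 102, 104, 105})),
           ("2D-simplex", frozenset({105, 106, 107}))]

observed = {("2D-simplex", frozenset({101, 104, 105}))}  # patterns seen at the decision moment

decision_vector = [1 if q in observed else 0 for q in queries]
# decision_vector == [1, 0, 0]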

Figure 10 is a flowchart of a process 1000 for encoding signals using a recurrent artificial neural network based on a characterization of the activity in the network. Signals can be encoded in a variety of contexts, such as transmission, encryption, and data storage. Process 1000 can be performed by a system of one or more data processing devices that perform operations in accordance with the logic of one or more sets of machine-readable instructions. For example, process 1000 can be performed by the same system of one or more computers that executes the software implementing the recurrent artificial neural network used in process 1000. In some cases, process 1000 can be performed by the same data processing devices that perform process 400. In some cases, process 1000 can be performed by, e.g., an encoder in a signal transmission system or an encoder of a data storage system.

The system performing process 1000 inputs a signal into the recurrent artificial neural network (at 1005). In some cases, the input of the signal is a discrete injection event; in other cases, the input signal is streamed into the recurrent artificial neural network.

The system performing process 1000 identifies one or more decision moments in the recurrent artificial neural network (at 1010). For example, the system can identify one or more decision moments by performing process 400 of Figure 4.

The system performing process 1000 reads the output of the recurrent artificial neural network (at 1015). As discussed above, in some implementations the content of the output of the recurrent artificial neural network is the activity in the neural network that matches the patterns used to identify the decision points.

In some implementations, dedicated "reader nodes" can be added to the neural network to identify occurrences of particular patterns of activity at particular collections of nodes, and hence to read the output of the recurrent artificial neural network at 1015. A reader node fires if, and only if, the activity at a particular collection of nodes satisfies timing (and possibly also magnitude) criteria. For example, to read an occurrence of pattern 505 of Figure 5 at nodes 104, 105, 106 of Figures 2 and 3, a reader node could be connected to nodes 104, 105, 106 (or to the links 110 between those nodes). The reader node itself becomes active only if an activity pattern involving nodes 104, 105, 106 (or their links) occurs.
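A reader node of this kind can be sketched as follows; the fixed timing window, and the choice to require every watched node to have fired within it, are simplifying assumptions standing in for the tailored responses described below.

class ReaderNode:
    """Fires when every watched node has been active within the last 'window' time units."""

    def __init__(self, watched, window=3):
        self.watched = set(watched)
        self.window = window
        self.last_seen = {}

    def observe(self, node, t):
        """Record that a node fired at time t; return True if the reader node fires."""
        if node in self.watched:
            self.last_seen[node] = t
        return (self.watched <= set(self.last_seen) and
                t - min(self.last_seen.values()) <= self.window)

For example, a ReaderNode([104, 105, 106]) stays silent as nodes 104 and 105 fire at times 0 and 1, and itself fires when node 106 fires at time 2.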

Using such reader nodes removes the need to define time windows for the recurrent artificial neural network as a whole. In particular, different reader nodes can be connected to different nodes and/or different numbers of nodes (or the links between them). Each reader node can be given a tailored response (e.g., a different decay time in an integrate-and-fire model) so as to identify different patterns of activity.

The system performing process 1000 transmits or stores the output of the recurrent artificial neural network (at 1020). The particular action performed at 1020 reflects the context in which process 1000 is being used. For example, in contexts where secure or compressed communication is desired, the system performing process 1000 can transmit the output of the recurrent neural network to a receiver that has access to the same or a similar recurrent neural network. As another example, in contexts where secure or compressed data storage is desired, the system performing process 1000 can record the output of the recurrent neural network in one or more machine-readable data storage devices for later access.

In some implementations, less than the complete output of the recurrent neural network may be transmitted or stored. For example, in implementations where the content of the output of the recurrent neural network is the activity in the network that matches patterns indicative of complexity, only the activity matching relatively more complex or higher-dimensional patterns may be transmitted or stored. As an example, with reference to patterns 500 of Figure 5, in some implementations only activity matching patterns 515, 520, 525, and 530 is transmitted or stored, while activity matching patterns 505 and 510 is ignored or discarded. In this way, a lossy process reduces the volume of data transmitted or stored at the cost of the completeness of the encoded information.

Figure 11 is a flowchart of a process 1100 for decoding signals using a recurrent artificial neural network based on a characterization of the activity in the network. Signals can be decoded in a variety of contexts, such as signal reception, decryption, and reading data from storage. Process 1100 can be performed by a system of one or more data processing devices that perform operations in accordance with the logic of one or more sets of machine-readable instructions. For example, process 1100 can be performed by the same system of one or more computers that executes the software implementing the recurrent artificial neural network used in process 1100. In some cases, process 1100 can be performed by the same data processing devices that perform process 400 and/or process 1000. In some cases, process 1100 can be performed by, e.g., a decoder in a signal transmission system or a decoder of a data storage system.

The system performing process 1100 receives at least a portion of the output of a recurrent artificial neural network (at 1105). The particular action performed at 1105 reflects the context in which process 1100 is being used. For example, the system performing process 1100 can receive a transmitted signal that includes the output of the recurrent artificial neural network, or can read a machine-readable data storage device on which the output of the recurrent artificial neural network is stored.

The system performing process 1100 reconstructs the input to the recurrent artificial neural network from the received output (at 1110). The reconstruction can proceed in a variety of ways. For example, in some implementations a second artificial neural network (recurrent or not) can be trained to reconstruct the input to the recurrent neural network from the output received at 1105.

As another example, in some implementations a decoder trained using machine learning (including, but not limited to, deep learning) can be used to reconstruct the input to the recurrent neural network from the output received at 1105.

As another example, in some implementations the input to the same recurrent artificial neural network, or to a similar one, can be iteratively permuted until the output of that recurrent artificial neural network matches, to some degree, the output received at 1105.

In some implementations, process 1100 can include receiving user input indicating the extent to which the input is to be reconstructed, and the reconstruction at 1110 can be adjusted accordingly in response. For example, the user input may specify that a complete reconstruction is not required, and the system performing process 1100 adjusts the reconstruction in response. For instance, in implementations where the content of the output of the recurrent neural network is the activity in the network that matches patterns indicative of complexity, only the portion of the output characterizing activity that matches relatively more complex or higher-dimensional patterns may be used to reconstruct the input. As an example, with reference to patterns 500 of Figure 5, in some implementations only activity matching patterns 515, 520, 525, and 530 may be used to reconstruct the input, while activity matching patterns 505 and 510 may be ignored or discarded. In this way, lossy reconstruction can be performed in selected circumstances.

In some implementations, processes 1000 and 1100 can be used for peer-to-peer encrypted communication. In particular, both the transmitter (i.e., the encoder) and the receiver (i.e., the decoder) can be equipped with the same recurrent artificial neural network. Several aspects of the shared recurrent artificial neural network can be established to ensure that third parties cannot reverse-engineer it and decrypt the signal, including: the structure of the recurrent artificial neural network; the functional settings of the recurrent artificial neural network, including node states and edge weights; the sizes (or dimensions) of the patterns; and the fraction of patterns in each dimension. These parameters can be thought of as multiple layers that together secure the transmission. In addition, in some implementations the timing of the decision moments can serve as a key for decrypting the signal.

Although processes 1000 and 1100 are presented in terms of encoding and decoding with a single recurrent artificial neural network, they can also be applied in systems and processes that rely on multiple recurrent artificial neural networks. Those recurrent artificial neural networks can operate in parallel or in series.

As an example of operation in series, the output of a first recurrent artificial neural network can be used as the input to a second recurrent artificial neural network. The resulting output of the second recurrent artificial neural network is a twice-encoded (or twice-encrypted) version of the input to the first recurrent artificial neural network. Such a series arrangement of recurrent artificial neural networks is useful in circumstances where different parties have different levels of access to information; for example, in a medical records system, a patient's identifying information may be inaccessible to a party who uses, and has access to, other portions of the medical record.

As an example of operation in parallel, the same information can be input into several different recurrent artificial neural networks. The different outputs of those networks can be used, e.g., to ensure that the input can be reconstructed with high fidelity.

Various modifications can be made to the implementations described above. For example, although this application generally indicates that activity within the recurrent artificial neural network should match patterns indicative of ordering, this is not limiting. Rather, in some implementations the activity within the recurrent artificial neural network may merely comport with a pattern without necessarily displaying activity that matches the pattern. For example, an increase in the probability that the recurrent neural network displays activity matching a pattern can be treated as a non-random ordering of the activity.

As another example, in some implementations different sets of patterns can be developed to characterize the activity in different recurrent artificial neural networks. For example, patterns can be developed according to their effectiveness in characterizing the activity of the different recurrent artificial neural networks, where the effectiveness can be quantified, e.g., based on the size of a table or vector holding the counts of occurrences of the different patterns.

As another example, in some implementations the patterns used to characterize the activity in a recurrent artificial neural network can take account of the strengths of the connections between nodes. In other words, the patterns described above treat all signal-transmission activity between two nodes in a binary fashion, i.e., the activity is either present or absent. This is not limiting. Rather, in some implementations, activity over links of a certain level or strength may be required in order for that activity to be regarded as displaying ordered complexity in the activity of the recurrent artificial neural network, i.e., in order to comport with a pattern.

As another example, the content of the output of the recurrent artificial neural network can include patterns of activity that occur outside the time windows in which the activity in the neural network has distinguishable complexity. For example, with reference to Figure 10, the output of the recurrent artificial neural network that is read at 1015 and transmitted or stored at 1020 can include information encoding activity patterns that occur, e.g., outside dashed rectangles 915, 920, and 925 of graph 905 of Figure 9. For example, the output of the recurrent artificial neural network can characterize only the highest-dimensional activity patterns, whenever those patterns occur. As another example, the output of the recurrent artificial neural network can characterize only the activity patterns that enclose cavities, whenever those patterns occur.

Figures 12, 13, and 14 are schematic illustrations of a binary form or representation 1200 of topological structures, such as patterns of activity in a neural network. The topological structures illustrated in Figures 12, 13, and 14 all contain the same information, namely an indication of the presence or absence of features in a graph. The features can be, e.g., activity in a neural network device. In some implementations, the activity is identified based on a period during which the activity in the neural network has a complexity that is distinguishable from other activity responsive to the input.

As shown in Figure 12, binary representation 1200 includes bits 1205, 1207, 1211, 1293, 1294, 1297 and an arbitrary number of additional bits (represented by the "..."). For ease of illustration, bits 1205, 1207, 1211, 1293, 1294, 1297, ... are drawn as discrete rectangles, with the binary value of each bit indicated by whether the rectangle is filled. In Figures 12, 13, and 14, binary representation 1200 superficially appears to be a one-dimensional vector of bits (Figures 12 and 13) or a two-dimensional matrix of bits (Figure 14). However, representation 1200 differs from a vector, a matrix, or another ordered collection of bits in that it encodes the same information regardless of the order of the bits, i.e., regardless of the position of each bit within the collection.

For example, in some implementations each individual bit 1205, 1207, 1211, 1293, 1294, 1297, ... can indicate the presence or absence of a topological feature regardless of the location of that feature in the graph. As an example, a bit such as bit 1207 can indicate the presence of a topological feature conforming to pattern 505 of Figure 5, regardless of whether the activity occurred among nodes 104, 105, and 101 or among nodes 105, 101, and 102 of Figure 2. Although each individual bit 1205, 1207, 1211, 1293, 1294, 1297, ... can be associated with a particular feature, the location of that feature in the graph need not be encoded, e.g., by the position of the corresponding bit within representation 1200. In other words, in some implementations representation 1200 may provide only an isomorphic topological reconstruction of the graph.
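As an illustration of this order independence, the representation can be held as a mapping from feature identifiers to bit values; any serialization order then carries the same information, provided the bit-to-feature correspondence is known. The feature names below are hypothetical.

# Feature-keyed bits: the order of iteration does not change the information content.
representation = {
    "pattern-505-present": 1,
    "pattern-510-present": 0,
    "cavity-705-present": 1,
}

# Two different serializations of the same representation.
by_name = [representation[k] for k in sorted(representation)]
by_reverse = [representation[k] for k in sorted(representation, reverse=True)]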

On the other hand, in other implementations, the positions of the individual bits 1205, 1207, 1211, 1293, 1294, 1297, ... may indeed encode information such as the locations of features in the graph. In these implementations, representation 1200 can be used to reconstruct the source graph. However, such an encoding need not be present.

Because a bit can represent the presence or absence of a topological feature regardless of the location of that feature in the graph, the position of the bit itself carries no necessary meaning. As shown in Figure 12, bit 1205 appears at the beginning of representation 1200 and before bit 1207, and bit 1207 appears before bit 1211. In Figures 13 and 14, the order of bits 1205, 1207, and 1211 within representation 1200 has changed relative to the positions of the other bits. Nevertheless, binary representation 1200 remains the same, and the set of rules or algorithms that defines the process for encoding information in binary representation 1200 also remains the same. As long as the correspondence between the bits and the features is known, the position of a bit within representation 1200 is irrelevant.
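One way to make this order-independence concrete is to carry the bit-to-feature correspondence explicitly, e.g., as a mapping from feature identifiers to bit values, so that any serialization order reproduces the same information. The following Python sketch is illustrative only; the key names are invented stand-ins for the features associated with bits 1205, 1207, and so on.

    # Sketch: an order-independent binary representation. Each key names a
    # topological feature; the value records its presence (1) or absence (0).
    # Any ordering of the items encodes the same information, because the
    # correspondence between bits and features is carried by the keys.

    representation = {
        "feature_1205": 0,   # e.g., absence of one directed simplex
        "feature_1207": 1,   # e.g., presence of a feature matching pattern 505
        "feature_1211": 1,
        "feature_1293": 1,
    }

    # Two serializations with different bit orders decode identically.
    order_a = sorted(representation.items())
    order_b = sorted(representation.items(), reverse=True)
    assert dict(order_a) == dict(order_b) == representation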

More specifically, each bit 1205, 1207, 1211, 1293, 1294, 1297, ... represents the presence or absence of a respective feature in a graph. A graph is formed by a set of nodes and a set of edges between those nodes. The nodes may correspond to objects. Examples of such objects include, e.g., artificial neurons in a neural network, individuals in a social network, and the like. The edges may correspond to some relationship between the objects. Examples of relationships include, e.g., a structural connection or activity along such a connection. In the context of a neural network, artificial neurons may be related to one another by a structural connection between the neurons or by the transmission of information along a structural connection. In the context of a social network, individuals may be related by a "friend" or other relationship connection, or by the transmission of information (e.g., a post) along such a connection. Edges may thus characterize relatively long-lived structural features of the set of nodes or relatively transient activity features that occur within a defined time frame. Further, edges may be either directed or bidirectional. A directed edge indicates the directionality of the relationship between the objects. For example, the transmission of information from a first neuron to a second neuron can be represented by a directed edge that indicates the direction of transmission. As another example, in a social network, a relationship connection may indicate that a second user is to receive information from a first user, but not that the first user is to receive information from the second user. In topological terms, a graph can be expressed as a set of unit intervals [0, 1], where 0 and 1 correspond to the respective nodes that are connected by an edge.
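A graph of this kind can be sketched as a set of nodes plus a set of directed edges. The node names in the following Python fragment are arbitrary placeholders introduced for illustration.

    # Sketch: a directed graph as a set of nodes plus a set of directed edges.
    # An edge (u, v) points from node u to node v, e.g., information
    # transmitted from a first neuron u to a second neuron v.

    nodes = {"n1", "n2", "n3"}
    edges = {("n1", "n2"),   # n1 -> n2
             ("n3", "n1"),   # n3 -> n1
             ("n3", "n2")}   # n3 -> n2

    def successors(node, edges):
        """Nodes reachable from `node` along a single directed edge."""
        return {v for (u, v) in edges if u == node}

    assert successors("n3", edges) == {"n1", "n2"}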

The features whose presence or absence is represented by bits 1205, 1207, 1211, 1293, 1294, and 1297 may be, e.g., a node, a set of nodes, one set among multiple sets of nodes, a set of edges, one set among multiple sets of edges, and/or other hierarchically more complex features (e.g., a set of nodes within a set of sets of nodes). Bits 1205, 1207, 1211, 1293, 1294, and 1297 generally represent the presence or absence of features at different hierarchical levels. For example, bit 1205 may represent the presence or absence of a single node, or bit 1205 may instead represent the presence or absence of a set of nodes.

In some implementations, bits 1205, 1207, 1211, 1293, 1294, and 1297 may represent features in the graph that satisfy a threshold criterion for some characteristic. For example, bits 1205, 1207, 1211, 1293, 1294, and 1297 may represent not only the presence of activity in a set of edges, but also that the weight of that activity is above or below a threshold criterion. The weights may, e.g., embody the training of a neural network device to a particular purpose, or may be an inherent characteristic of the edges.
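Such a thresholded feature bit might be sketched as follows; the weight map, the aggregation by summation, and the threshold value are all assumptions made for this illustration.

    # Sketch: set a bit only if activity is present on every edge of a set
    # and the summed weight of that activity exceeds a threshold criterion.

    def feature_bit(edge_set, active_edges, weights, threshold):
        present = edge_set <= active_edges            # all edges active?
        heavy = sum(weights[e] for e in edge_set) > threshold
        return 1 if (present and heavy) else 0

    weights = {("a", "b"): 0.9, ("b", "c"): 0.4}
    active = {("a", "b"), ("b", "c")}
    bit = feature_bit({("a", "b"), ("b", "c")}, active, weights, threshold=1.0)
    assert bit == 1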

Figures 5, 6, and 8, above, illustrate features whose presence or absence can be represented by bits 1205, 1207, 1211, 1293, 1294, 1297, ....

The directed simplices in the collections of patterns 500, 600, and 700 treat a functional or structural graph as a topological space, with the nodes as points. A bit can be used to represent structure or activity that involves one or more nodes and links and that comports with a simplex in the collections of patterns 500, 600, and 700, regardless of the identities of the particular nodes and/or links that participate in that structure or activity.

In some implementations, only some patterns of structure or activity may be identified, and/or some portion of the identified patterns of structure or activity may be discarded or otherwise ignored. For example, as shown in Figure 5, structure or activity that comports with the five-point, four-dimensional simplex pattern 515 inherently includes structure or activity that comports with the four-point, three-dimensional simplex pattern 510 and the three-point, two-dimensional simplex pattern 505. For example, points 0, 2, 3, 4 and points 1, 2, 3, 4 of the four-dimensional simplex pattern 515 of Figure 5 both comport with the three-dimensional simplex pattern 510. In some implementations, simplex patterns that include fewer points, and hence are of lower dimension, may be discarded or otherwise ignored.

As another example, only certain patterns of structure or activity need be identified. For example, in some implementations, only patterns with an odd number of points (e.g., three, five, seven, and so on) or an even number of dimensions (e.g., two, four, six, and so on) are used.
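Both kinds of filtering can be sketched compactly. In the following illustrative Python fragment, simplices are modeled as frozen sets of node identifiers; this format is an assumption of the sketch rather than a format used by the method itself.

    # Sketch: discard identified simplices that are faces of a larger
    # identified simplex (keeping only maximal ones), or keep only
    # simplices with an odd number of points.

    def maximal_only(simplices):
        """Drop any simplex that is a strict subset of another."""
        return [s for s in simplices
                if not any(s < t for t in simplices)]

    def odd_points_only(simplices):
        """Keep only simplices with an odd number of points."""
        return [s for s in simplices if len(s) % 2 == 1]

    found = [frozenset({0, 1, 2}), frozenset({0, 1, 2, 3, 4})]
    assert maximal_only(found) == [frozenset({0, 1, 2, 3, 4})]
    assert len(odd_points_only(found)) == 2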

Referring again to Figures 12, 13, and 14, the features whose presence or absence is represented by bits 1205, 1207, 1211, 1293, 1294, 1297, and so on may not be independent of one another. In particular, if bits 1205, 1207, 1211, 1293, 1294, and 1297 were to represent the presence or absence of zero-dimensional (0-D) simplices, each of which reflects the existence or activity of a single node, then bits 1205, 1207, 1211, 1293, 1294, and 1297 would be independent of one another. However, if bits 1205, 1207, 1211, 1293, 1294, and 1297 represent the presence or absence of higher-dimensional simplices, each of which reflects the existence or activity of multiple nodes, then the information encoded by the presence or absence of each individual feature may not be independent of the presence or absence of the other features.

Figure 15 is a schematic illustration of how the presence or absence of features that correspond to different bits can fail to be independent of one another. In particular, a subgraph 1500 is illustrated that includes four nodes 1505, 1510, 1515, and 1520 and six directed edges 1525, 1530, 1535, 1540, 1545, and 1550. More specifically, edge 1525 points from node 1505 to node 1510, edge 1530 points from node 1515 to node 1505, edge 1535 points from node 1520 to node 1505, edge 1540 points from node 1520 to node 1510, edge 1545 points from node 1515 to node 1510, and edge 1550 points from node 1515 to node 1520.

A single bit in representation 1200 (e.g., bit 1207, which is filled in Figures 12, 13, and 14) can represent the presence of a directed three-dimensional simplex. For example, such a bit can represent the presence of the three-dimensional simplex formed by nodes 1505, 1510, 1515, and 1520 and edges 1525, 1530, 1535, 1540, 1545, and 1550. A second bit in representation 1200 (e.g., bit 1293, which is filled in Figures 12, 13, and 14) can represent the presence of a directed two-dimensional simplex. For example, such a bit can represent the presence of the two-dimensional simplex formed by nodes 1515, 1505, and 1510 and edges 1525, 1530, and 1545. In this simple example, the information encoded by bit 1293 is entirely redundant with the information encoded by bit 1207.

It should be noted that the information encoded by bit 1293 may also be redundant with the information encoded by still further bits. For example, the information encoded by bit 1293 is redundant with a third bit and a fourth bit that represent the presence of additional directed two-dimensional simplices, examples of which are the simplex formed by nodes 1515, 1520, and 1510 and edges 1540, 1545, and 1550, and the simplex formed by nodes 1520, 1505, and 1510 and edges 1525, 1535, and 1540.
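This redundancy can be seen directly by enumerating the faces of the three-dimensional simplex of subgraph 1500. The following Python sketch assumes the source-to-sink node ordering 1515, 1520, 1505, 1510 that is implied by the edges listed above; the tuple format is introduced here for illustration only.

    # Sketch: the directed 3-simplex of subgraph 1500 and the directed
    # 2-simplex faces it implies. Each 3-node subset of the simplex's nodes
    # is itself a directed 2-simplex of subgraph 1500, so a bit for any one
    # face carries no information beyond the bit for the full simplex.

    from itertools import combinations

    simplex_1500 = ("1515", "1520", "1505", "1510")  # source-to-sink order

    faces = list(combinations(simplex_1500, 3))
    # The face matching bit 1293 in the text (nodes 1515, 1505, 1510) and
    # the faces matching the third and fourth bits are all among them.
    assert ("1515", "1505", "1510") in faces
    assert ("1515", "1520", "1510") in faces
    assert ("1520", "1505", "1510") in faces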

Figure 16 is another schematic illustration of how the presence or absence of features that correspond to different bits can fail to be independent of one another. In particular, a subgraph 1600 is illustrated that includes four nodes 1605, 1610, 1615, and 1620 and five directed edges 1625, 1630, 1635, 1640, and 1645. Nodes 1605, 1610, 1615, and 1620 and edges 1625, 1630, 1635, 1640, and 1645 generally correspond to nodes 1505, 1510, 1515, and 1520 and edges 1525, 1530, 1535, 1540, and 1545 of subgraph 1500 in Figure 15. However, in contrast with subgraph 1500, in which nodes 1515 and 1520 are connected by edge 1550, nodes 1615 and 1620 of subgraph 1600 are not connected by an edge.

A single bit in representation 1200 (e.g., bit 1205, which is unfilled in Figures 12, 13, and 14) can represent the absence of a directed three-dimensional simplex, e.g., a directed three-dimensional simplex that would include nodes 1605, 1610, 1615, and 1620. A second bit in representation 1200 (e.g., bit 1293, which is filled in Figures 12, 13, and 14) can represent the presence of a two-dimensional simplex, e.g., the two-dimensional simplex formed by nodes 1615, 1605, and 1610 and edges 1625, 1630, and 1645. The combination of filled bit 1293 and unfilled bit 1205 provides information about the presence or absence of other features, whether or not those features are themselves represented in representation 1200 (and about the states of other bits). In particular, the combination of the absence of a directed three-dimensional simplex and the presence of a directed two-dimensional simplex indicates that at least one edge is absent from: (1) the possible directed two-dimensional simplex formed by nodes 1615, 1620, and 1610, or (2) the possible directed two-dimensional simplex formed by nodes 1620, 1605, and 1610. The state of a bit that represents the presence or absence of either of these possible simplices is therefore not independent of the states of bits 1205 and 1293.

Although these examples have been described in terms of features with different numbers of nodes and hierarchical relationships, this need not be the case. For example, representation 1200 may include a collection of bits that corresponds only to, e.g., the presence or absence of three-dimensional simplices.

Certain properties arise when individual bits are used to represent the presence or absence of features in a graph. For example, the encoding of the information is fault tolerant and provides "graceful degradation" of the encoded information. In particular, the loss of a particular bit (or group of bits) may increase the uncertainty as to whether a feature is present or absent. However, the probability that the feature is present or absent can still be estimated from other bits that represent the presence or absence of adjacent features.

Likewise, as the number of bits increases, so does the certainty as to whether a feature is present or absent.

As another example, as discussed above, the ordering or arrangement of the bits is irrelevant to an isomorphic reconstruction of the graph that the bits represent. All that is required is a known correspondence between the bits and particular nodes/structures in the graph.

In some implementations, the patterns of activity in a neural network can be encoded in representation 1200 of Figures 12, 13, and 14. In general, the patterns of activity in a neural network are a result of a number of characteristics of the neural network, e.g., the structural connections between the nodes of the neural network, the weights between the nodes, and possibly a host of other parameters. For example, in some implementations, the neural network may be trained prior to the encoding of the patterns of activity in representation 1200.

However, regardless of whether or not the neural network has been trained, the pattern of activity responsive to a given input can be considered a "representation" or an "abstraction" of that input within the neural network. Thus, although representation 1200 may appear to be a straightforward collection of (in some cases, binary) digits, each digit can encode a relationship or correspondence between a particular input and the relevant activity in the neural network.

Figures 17, 18, 19, and 20 are schematic illustrations of the use of representations of the occurrence of topological structures in the activity in a neural network in four different classification systems 1700, 1800, 1900, and 2000. Classification systems 1700 and 1800 classify representations of the patterns of activity in a neural network as part of classifying an input. Classification systems 1900 and 2000 each classify approximations of representations of the patterns of activity in a neural network as part of classifying an input. In classification systems 1700 and 1800, the patterns of activity that are represented occur in, and are read from, a source neural network device 1705 that is part of classification systems 1700 and 1800. In contrast, in classification systems 1900 and 2000, the patterns of activity that are approximately represented occur in a source neural network device that is not part of classification systems 1900 and 2000; rather, the approximate representations are read from a source approximator 1905 that is part of classification systems 1900 and 2000.

In additional detail, as shown in Figure 17, classification system 1700 includes a source neural network 1705 and a linear classifier 1710. Source neural network 1705 is a neural network device that is configured to receive an input and to present the occurrence of topological structures in the activity within source neural network 1705. In the illustrated implementation, source neural network 1705 includes an input layer 1715 that receives the input. However, this is not necessarily the case. For example, in some implementations, some or all of the input can be injected into different layers and/or edges or nodes throughout source neural network 1705.

Source neural network 1705 can be any of a variety of different types of neural network. In general, source neural network 1705 is a recurrent neural network, e.g., a recurrent neural network that is modeled on a biological system. In some cases, source neural network 1705 can model a degree of the morphological, chemical, and other characteristics of a biological system. In general, source neural network 1705 is implemented on one or more computing devices with a relatively high degree of computational performance, e.g., a supercomputer. In such cases, classification system 1700 will typically be a distributed system in which linear classifier 1710 communicates with source neural network 1705 over, e.g., a data communications network.

In some implementations, source neural network 1705 can be untrained, and the activity that is represented can be the innate activity of source neural network 1705. In other implementations, source neural network 1705 can be trained, and the activity that is represented can embody this training.

The representations read from source neural network 1705 can be representations such as representation 1200 of Figures 12, 13, and 14. The representations can be read from source neural network 1705 in a variety of ways. For example, in the illustrated example, source neural network 1705 includes "reader nodes" that read the patterns of activity between other nodes within source neural network 1705. In other implementations, the activity within source neural network 1705 is read by a data processing component that is programmed to monitor source neural network 1705 for patterns of activity with a relatively high degree of order. In still other implementations, source neural network 1705 can include an output layer from which representation 1200 can be read, e.g., when source neural network 1705 is implemented as a feed-forward neural network.

Linear classifier 1710 is a device that classifies objects, namely, representations of the patterns of activity in source neural network 1705, based on a linear combination of the objects' characteristics. Linear classifier 1710 includes an input 1720 and an output 1725. Input 1720 is coupled to receive representations of the patterns of activity in source neural network 1705. In other words, a representation of the patterns of activity in source neural network 1705 is a feature vector that characterizes the input to source neural network 1705 and that is used by linear classifier 1710 to classify that input. Linear classifier 1710 can receive the representations of the patterns of activity in a variety of ways. For example, the representations can be received as discrete events or as a continuous stream over a real-time or non-real-time communications channel.
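As an illustration of classifying such feature vectors with a linear model, the following sketch uses scikit-learn's logistic regression, which is one linear classifier among many; the feature vectors and labels are toy values invented here, not data from the description above.

    # Sketch: a linear classifier over binary representations of activity
    # patterns. Each row is one representation-1200-style feature vector;
    # the labels are the classes of the original inputs.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    X = np.array([[0, 1, 1, 0],    # representation for input A
                  [1, 1, 0, 0],    # representation for input B
                  [0, 0, 1, 1],
                  [1, 0, 0, 1]])
    y = np.array([0, 0, 1, 1])     # classes of the original inputs

    clf = LogisticRegression().fit(X, y)
    print(clf.predict(np.array([[0, 1, 1, 1]])))  # classify a new representation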

Output 1725 is coupled to output the classification result from linear classifier 1710. In the illustrated implementation, output 1725 is schematically illustrated as a parallel port with multiple channels. However, this is not necessarily the case. For example, output 1725 can output the classification result over a serial port or a port that combines parallel and serial capabilities.

In some implementations, linear classifier 1710 can be implemented on one or more computing devices with relatively limited computational performance. For example, linear classifier 1710 can be implemented on a personal computer or a mobile computing device such as a smartphone or a tablet.

Referring to Figure 18, classification system 1800 includes source neural network 1705 and a neural network classifier 1810. Neural network classifier 1810 is a neural network device that classifies objects, namely, representations of the patterns of activity in source neural network 1705, based on a non-linear combination of the objects' characteristics. In the illustrated implementation, neural network classifier 1810 is a feed-forward network that includes an input layer 1820 and an output layer 1825. As with linear classifier 1710, neural network classifier 1810 can receive the representations of the patterns of activity in source neural network 1705 in a variety of ways. For example, the representations can be received as discrete events or as a continuous stream over a real-time or non-real-time communications channel.

In some implementations, neural network classifier 1810 can perform inferences on one or more computing devices with relatively limited computational performance. For example, neural network classifier 1810 can be implemented on a personal computer or a mobile computing device such as a smartphone or a tablet, e.g., in a neural processing unit of such a device. Like classification system 1700, classification system 1800 will generally be a distributed system in which a remote neural network classifier 1810 communicates with source neural network 1705 over, e.g., a data communications network.

In some implementations, neural network classifier 1810 can be a deep neural network, e.g., a convolutional neural network that includes convolutional layers, pooling layers, and fully-connected layers. The convolutional layers can generate feature maps, e.g., using linear convolutional filters and/or nonlinear activation functions. The pooling layers can reduce the number of parameters and control overfitting. The computations performed by the different layers in neural network classifier 1810 can be defined in different ways in different implementations of neural network classifier 1810.
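A minimal sketch of such a non-linear classification of a representation, using a single hidden layer with a ReLU nonlinearity, follows. The layer sizes and random weights are illustrative stand-ins for trained parameters.

    # Sketch: a one-hidden-layer feed-forward classifier that forms a
    # non-linear combination of the representation's features.

    import numpy as np

    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)   # input layer -> hidden
    W2, b2 = rng.normal(size=(8, 3)), np.zeros(3)   # hidden -> output layer

    def classify(representation):
        h = np.maximum(0.0, representation @ W1 + b1)   # ReLU nonlinearity
        logits = h @ W2 + b2
        return int(np.argmax(logits))                   # index of chosen class

    print(classify(np.array([0.0, 1.0, 1.0, 0.0])))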

Referring to Figure 19, classification system 1900 includes a source approximator 1905 and linear classifier 1710. As discussed in further detail below, source approximator 1905 is a relatively simple neural network that is trained to receive an input vector, at an input layer 1915 or elsewhere, and to output a vector that approximates a representation of the occurrence of topological structures in the patterns of activity in a relatively more complex neural network. For example, source approximator 1905 can be trained to approximate a recurrent source neural network, e.g., a recurrent neural network that is modeled on a biological system and includes a degree of the morphological, chemical, and other characteristics of a biological system. In the illustrated implementation, source approximator 1905 includes an input layer 1915 and an output layer 1920. Input layer 1915 is coupled to receive the input data. Output layer 1920 is coupled to output an approximation of a representation of the activity within a neural network device, for receipt by input 1720 of the linear classifier. For example, output layer 1920 can output an approximation 1200' of representation 1200 of Figures 12, 13, and 14. Incidentally, representation 1200 as illustrated in Figures 17 and 18 and approximation 1200' of representation 1200 as illustrated in Figures 19 and 20 are identical. This is merely for ease of illustration. In general, approximation 1200' will differ from representation 1200 in at least some respects. Notwithstanding these differences, linear classifier 1710 can classify approximation 1200'.

In general, source approximator 1905 can perform inferences on one or more computing devices with relatively limited computational performance. For example, source approximator 1905 can be implemented on a personal computer or a mobile computing device such as a smartphone or a tablet, e.g., in a neural processing unit of such a device. In contrast with classification systems 1700 and 1800, classification system 1900, including, e.g., source approximator 1905 and linear classifier 1710, will typically be housed within a single housing, with linear classifier 1710 implemented on the same data processing device or on data processing devices coupled by a hardwired connection.

Referring to Figure 20, classification system 2000 includes source approximator 1905 and neural network classifier 1810. Output layer 1920 of source approximator 1905 is coupled to output an approximation 1200' of a representation of the activity within a neural network device, for receipt by input 1820 of the neural network classifier. Notwithstanding any differences between approximation 1200' and representation 1200, neural network classifier 1810 can classify approximation 1200'. Like classification system 1900, classification system 2000, including, e.g., source approximator 1905 and neural network classifier 1810, will typically be housed within a single housing, with neural network classifier 1810 implemented on the same data processing device or on data processing devices coupled by a hardwired connection.

Figure 21 is a schematic illustration of an edge device 2100 that includes a local artificial neural network that can be trained using representations of the occurrence of topological structures that correspond to activity in a source neural network. In this context, a local artificial neural network is, e.g., an artificial neural network that is executed entirely on one or more local processors that do not require a communications network to exchange data. In general, the local processors will be connected by hardwired connections. In some cases, the local processors can be housed within a single housing, e.g., a single personal computer or a single handheld mobile device. In some cases, the local processors can be under the control of, and accessible by, a single individual or a limited number of individuals. In effect, by training a simpler and/or less extensively trained, but more exclusive, second neural network using representations of the occurrence of topological structures in a more complex source neural network (e.g., using supervised learning or reinforcement learning techniques), even individuals with limited computational resources and limited numbers of training samples can train a neural network as desired. This reduces the storage requirements and the computational complexity during training, and conserves resources such as battery life.

In the illustrated implementation, edge device 2100 is schematically illustrated as a security camera device that includes an optical imaging system 2110, image processing electronics 2115, a source approximator 2120, a representation classifier 2125, and a communications controller and interface 2130.

Optical imaging system 2110 can include, e.g., one or more lenses (or even a pinhole) and a charge-coupled device (CCD). Image processing electronics 2115 can read the output of optical imaging system 2110 and will generally perform basic image processing functions. Communications controller and interface 2130 is a device that is configured to control the flow of information into and out of edge device 2100. As discussed in further detail below, among the operations that communications controller and interface 2130 can perform are the transmission of images of interest to other devices and the receipt of training information from other devices. Communications controller and interface 2130 can thus include both a data transmitter and a data receiver that can communicate over, e.g., a data port 2135. Data port 2135 can be a wired port, a wireless port, an optical port, or the like.

Source approximator 2120 is a relatively simple neural network that is trained to output a vector that approximates a representation of the occurrence of topological structures in the patterns of activity in a relatively more complex neural network. For example, source approximator 2120 can be trained to approximate a recurrent source neural network, e.g., a recurrent neural network that is modeled on a biological system and includes a degree of the morphological, chemical, and other characteristics of a biological system.

Representation classifier 2125 is either a linear classifier or a neural network classifier that is coupled to receive, from source approximator 2120, an approximation of a representation of the patterns of activity in the source neural network, and to output a classification result. Representation classifier 2125 can be a deep neural network, e.g., a convolutional neural network that includes convolutional layers, pooling layers, and fully-connected layers. The convolutional layers can generate feature maps, e.g., using linear convolutional filters and/or nonlinear activation functions. The pooling layers can reduce the number of parameters and control overfitting. The computations performed by the different layers in representation classifier 2125 can be defined in different ways in different implementations of representation classifier 2125.

In operation, in some implementations, optical imaging system 2110 can generate a raw digital image. Image processing electronics 2115 can read the raw image and will generally perform at least some basic image processing functions. Source approximator 2120 can receive the image from image processing electronics 2115 and perform inference operations to output a vector that approximates a representation of the topological structures that would arise in the patterns of activity in the relatively more complex neural network. This approximation vector is input into representation classifier 2125, which determines whether the approximation vector satisfies one or more sets of classification criteria. Examples include facial recognition and other machine vision operations. In the event that representation classifier 2125 determines that the approximation vector satisfies a set of classification criteria, representation classifier 2125 can instruct communications controller and interface 2130 to transmit information regarding the image. For example, communications controller and interface 2130 can transmit the image itself, the classification of the image, and/or other information regarding the image.
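The inference path just described can be sketched as a simple pipeline. The callables in the following fragment are hypothetical stand-ins for image processing electronics 2115, source approximator 2120, representation classifier 2125, and communications controller and interface 2130; none of their names or signatures come from this description.

    # Sketch: the inference path of an edge device. A classification
    # result of None stands for "no classification criteria satisfied."

    def run_pipeline(raw_image, process_image, approximate, classify, transmit):
        image = process_image(raw_image)        # basic image processing
        approx_vector = approximate(image)      # approximation of the representation
        result = classify(approx_vector)        # apply classification criteria
        if result is not None:                  # criteria satisfied
            transmit(image, result)             # send image and classification
        return result

    run_pipeline(b"raw-bytes",
                 lambda x: x,
                 lambda x: [1, 0],
                 lambda v: "face" if v[0] else None,
                 lambda img, r: print("transmitting:", r))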

At some point, it may be desirable to change the classification process. In these cases, communications controller and interface 2130 can receive a training set. In some implementations, the training set can include raw or processed image data and representations of the topological structures that arise in the patterns of activity in the relatively more complex neural network. Such a training set can be used to retrain source approximator 2120, e.g., using supervised learning or reinforcement learning techniques. In particular, the representations are used as the target answer vectors, i.e., the desired result of source approximator 2120 processing the raw or processed image data.
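A single supervised retraining step of this kind might be sketched as follows, with a linear model standing in for source approximator 2120 and a mean-squared-error gradient update; all shapes, data, and the learning rate are illustrative assumptions of the sketch.

    # Sketch: one retraining pass using representations from the more
    # complex network as target answer vectors.

    import numpy as np

    def retrain_step(W, images, targets, lr=0.01):
        preds = images @ W                     # approximator's current output
        error = preds - targets                # difference from target vectors
        grad = images.T @ error / len(images)  # MSE gradient w.r.t. W
        return W - lr * grad                   # gradient-descent update

    W = np.zeros((16, 4))                      # 16 image features -> 4-bit target
    images = np.random.rand(8, 16)             # processed image data (toy)
    targets = np.random.randint(0, 2, (8, 4))  # representations as targets (toy)
    W = retrain_step(W, images, targets)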

In other implementations, the training set can include representations of the topological structures that arise in the patterns of activity in the relatively more complex neural network, along with the desired classifications of those representations. Such a training set can be used to retrain a neural network representation classifier 2125, e.g., using supervised learning or reinforcement learning techniques. In particular, the desired classifications are used as the target answer vectors, i.e., the desired result of representation classifier 2125 processing the representations of the topological structures.

Regardless of whether source approximator 2120 or representation classifier 2125 is retrained, the inference operations of edge device 2100 can easily be adapted to changing circumstances and objectives, without large amounts of training data and without time-consuming, computationally-intensive, and energy-intensive iterative training.

Figure 22 is a schematic illustration of an edge device 2200 that includes a local artificial neural network that can be trained using representations of the occurrence of topological structures that correspond to activity in a source neural network. In the illustrated implementation, edge device 2200 is schematically illustrated as a mobile computing device such as a smartphone or a tablet. Edge device 2200 includes an optical imaging system (e.g., on the back of edge device 2200, not shown), image processing electronics 2215, a representation classifier 2225, a communications controller and interface 2230, and a data port 2235. These components can have characteristics, and perform actions, that correspond to those of optical imaging system 2110, image processing electronics 2115, representation classifier 2125, communications controller and interface 2130, and data port 2135 of edge device 2100 in Figure 21.

The illustrated implementation of edge device 2200 additionally includes one or more additional sensors 2240 and a multi-input source approximator 2245. The one or more sensors 2240 can sense one or more characteristics of edge device 2200 itself or of the environment surrounding edge device 2200. For example, in some implementations, sensor 2240 can be an accelerometer that senses the acceleration to which edge device 2200 is subject. As another example, in some implementations, sensor 2240 can be an acoustic sensor, e.g., a microphone that senses noise in the environment of edge device 2200. Still other examples of sensors 2240 include chemical sensors (e.g., an "artificial nose" or the like), humidity sensors, radiation sensors, and the like. In some cases, sensors 2240 are coupled to processing electronics that can read the output of the sensor (or other information, e.g., a contact list or a map) and perform basic processing functions. Because the physical parameters that the various sensors actually sense differ, different implementations of sensors 2240 can have different "modalities."

Multi-input source approximator 2245 is a relatively simple neural network that is trained to output a vector that approximates a representation of the occurrence of topological structures in the patterns of activity in a relatively more complex neural network. For example, multi-input source approximator 2245 can be trained to approximate a recurrent source neural network, e.g., a recurrent neural network that is modeled on a biological system and includes a degree of the morphological, chemical, and other characteristics of a biological system.

Unlike source approximator 2120, multi-input source approximator 2245 is coupled to receive raw or processed sensor data from multiple sensors and to return, based on this data, an approximation of a representation of the topological structures that would arise in the patterns of activity in the relatively more complex neural network. For example, multi-input source approximator 2245 can receive processed image data from image processing electronics 2215, as well as, e.g., acoustic data, acceleration data, chemical data, or other data from one or more sensors 2240. Multi-input source approximator 2245 can be a deep neural network such as a convolutional neural network that includes convolutional layers, pooling layers, and fully-connected layers. The computations performed by the different layers in multi-input source approximator 2245 can be dedicated to a single type of sensor data or to sensor data of multiple modalities.

Regardless of the particular organization of multi-input source approximator 2245, multi-input source approximator 2245 returns the approximation based on the raw or processed sensor data from multiple sensors. For example, processed image data from image processing electronics 2215 and acoustic data from a microphone sensor 2240 can be used by multi-input source approximator 2245 to approximate a representation of the topological structures that would arise in the patterns of activity in a relatively more complex neural network that received the same data.
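One common way to sketch such multi-modality input, assumed here purely for illustration, is to concatenate per-modality feature vectors before the shared layers of the approximator; the feature extractors and vector sizes below are invented.

    # Sketch: fusing modalities for a multi-input source approximator by
    # concatenating per-modality feature vectors into one input vector.

    import numpy as np

    def fuse(image_feats, acoustic_feats, accel_feats):
        """Concatenate per-modality features into one input vector."""
        return np.concatenate([image_feats, acoustic_feats, accel_feats])

    x = fuse(np.random.rand(32),   # from the image processing electronics
             np.random.rand(8),    # from a microphone sensor
             np.random.rand(3))    # from an accelerometer sensor
    assert x.shape == (43,)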

At some point, it may be desirable to change the classification process. In these cases, communications controller and interface 2230 can receive a training set. In some implementations, the training set can include raw or processed image data and representations of the topological structures that arise in the patterns of activity in the relatively more complex neural network. Such a training set can be used to retrain multi-input source approximator 2245, e.g., using supervised learning or reinforcement learning techniques. In particular, the representations are used as the target answer vectors, i.e., the desired result of multi-input source approximator 2245 processing the raw or processed image data.

In other implementations, the training set can include representations of the topological structures that arise in the patterns of activity in the relatively more complex neural network, along with the desired classifications of those representations. Such a training set can be used to retrain a neural network representation classifier 2225, e.g., using supervised learning or reinforcement learning techniques. In particular, the desired classifications are used as the target answer vectors, i.e., the desired result of representation classifier 2225 processing the representations of the topological structures.

Regardless of whether multi-input source approximator 2245 or representation classifier 2225 is retrained, the inference operations of edge device 2200 can easily be adapted to changing circumstances and objectives, without large amounts of training data and without time-consuming, computationally-intensive, and energy-intensive iterative training.

Figure 23 is a schematic illustration of a system 2300 in which local neural networks can be trained using representations of the occurrence of topological structures that correspond to activity in a source neural network. The target neural networks can be implemented on relatively simple, less expensive data processing systems, whereas the source neural network can be implemented on a relatively complex, more expensive data processing system.

System 2300 includes a variety of local neural network devices 2305 that have local neural networks, a telephone base station 2310, a wireless access point 2315, a server system 2320, and one or more data communications networks 2325.

Local neural network devices 2305 are devices that are configured to process data using computationally less intensive target neural networks. As illustrated, local neural network devices 2305 can be implemented as any of mobile computing devices, cameras, automobiles, or other devices, fixtures, and mobile components, as well as different makes and models of devices within each category. Different local neural network devices 2305 can belong to different owners. In some implementations, access to the data processing functionality of a local neural network device 2305 will generally be restricted to these owners and/or the owners' designees.

Each local neural network device 2305 can include one or more source approximators that are trained to output vectors that approximate a representation of the occurrence of topological structures in the patterns of activity in a relatively more complex neural network. For example, the relatively more complex neural network can be a recurrent source neural network, e.g., a recurrent neural network that is modeled on a biological system and includes a degree of the morphological, chemical, and other characteristics of a biological system.

In some implementations, in addition to processing data using a source approximator, local neural network devices 2305 can be programmed to retrain the source approximator using representations of the occurrence of topological structures in the patterns of activity in the relatively more complex neural network as target answer vectors. For example, local neural network devices 2305 can be programmed to perform one or more iterative training techniques (e.g., gradient descent or stochastic gradient descent). In other implementations, the source approximators in local neural network devices 2305 can be trained by, e.g., a dedicated training system or a training system that is installed on a personal computer that can interact with local neural network devices 2305 to train the source approximators.
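An on-device stochastic gradient descent loop of this kind might be sketched as follows; the linear model, the per-sample mean-squared-error update, and all sizes are assumptions of the sketch rather than details of local neural network devices 2305.

    # Sketch: stochastic gradient descent over a training set of
    # (input, target representation) pairs, one sample at a time.

    import random
    import numpy as np

    def sgd_retrain(W, training_set, lr=0.05, epochs=3):
        for _ in range(epochs):
            random.shuffle(training_set)
            for x, target in training_set:
                pred = W @ x                          # approximator output
                W -= lr * np.outer(pred - target, x)  # per-sample MSE step
        return W

    pairs = [(np.random.rand(16), np.random.randint(0, 2, 4).astype(float))
             for _ in range(32)]
    W = sgd_retrain(np.zeros((4, 16)), pairs)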

Each local neural network device 2305 includes one or more wireless or wired data communications components. In the illustrated implementation, each local neural network device 2305 includes at least one wireless data communications component, e.g., a mobile phone transceiver, a wireless transceiver, or both. The mobile phone transceivers are capable of exchanging data with telephone base station 2310. The wireless transceivers are capable of exchanging data with wireless access point 2315. Each local neural network device 2305 can also exchange data with peer mobile computing devices.

Telephone base station 2310 and wireless access point 2315 are connected for data communication with one or more data communications networks 2325 and can exchange information with server system 2320 over the networks. Local neural network devices 2305 are thus also generally in data communication with server system 2320. However, this is not necessarily the case. For example, in implementations in which local neural network devices 2305 are trained by other data processing devices, local neural network devices 2305 need only be in data communication with those other data processing devices at least once.

Server system 2320 is a system of one or more data processing devices that is programmed to perform data processing activities in accordance with one or more sets of machine-readable instructions. The data processing activities can include serving training sets to the training systems of local neural network devices 2305. As discussed above, the training systems can be internal to the local neural network devices 2305 themselves or can reside on one or more other data processing devices. The training sets can include representations of the occurrence of topological structures that correspond to activity in the source neural network, along with the corresponding input data.

In some implementations, server system 2320 also includes the source neural network. However, this is not necessarily the case, and server system 2320 may instead receive the training sets from yet another system of data processing devices that implements the source neural network.

In operation, after server system 2320 receives the training sets (from a source neural network at server system 2320 itself or elsewhere), server system 2320 can serve the training sets to the trainers that train local neural network devices 2305. The training sets can be used to train the source approximators in the target local neural network devices 2305 such that the target neural networks approximate the operation of the source neural network.

Figures 24, 25, 26, and 27 are schematic illustrations of the use of representations of the occurrence of topological structures in the activity in a neural network in four different systems, namely, systems 2400, 2500, 2600, and 2700. Systems 2400, 2500, 2600, and 2700 can be configured to perform any of a number of different operations. For example, systems 2400, 2500, 2600, and 2700 can perform object localization operations, object detection operations, object segmentation operations, prediction operations, action selection operations, and the like.

Object localization operations locate an object within an image. For example, a bounding box can be constructed around the object. In some cases, object localization can be combined with object recognition, in which the localized object is labeled with an appropriate designation.

Object detection operations classify image pixels as either belonging to a particular class (e.g., belonging to an object of interest) or not. In general, object detection is performed by grouping pixels and forming bounding boxes around the pixel groups. The bounding box should fit tightly around the object.

In general, object segmentation assigns a class label to each image pixel. Thus, rather than producing a bounding box, object segmentation proceeds on a pixel-by-pixel basis, and generally only a single label is assigned to each pixel.

Prediction operations seek to draw conclusions that lie outside the range of the observed data. Although prediction operations can seek to forecast the occurrence of future events (e.g., based on information about past and current states), prediction operations can also seek conclusions about past and current states based on incomplete information about those states.

Action selection operations seek to choose an action based on a set of conditions. Action selection operations have traditionally been decomposed into different approaches, such as symbol-based systems (classical planning), distributed solutions, and reactive or dynamic planning.

Systems 2400 and 2500 each perform the desired operation on representations of the patterns of activity in a neural network. Systems 2600 and 2700 each perform the desired operation on approximations of representations of the patterns of activity in a neural network. In systems 2400 and 2500, the patterns of activity that are represented occur in a source neural network device 1705 that is part of systems 2400 and 2500, and the representations of that activity are read from the source neural network. In contrast, in systems 2600 and 2700, the patterns of activity that are approximately represented occur in a source neural network device that is not part of systems 2600 and 2700; rather, the approximations of the representations of those patterns of activity are read from a source approximator 1905 that is part of systems 2600 and 2700.

In more detail, as shown in Figure 24, system 2400 includes a source neural network 1705 and a linear processor 2410. Linear processor 2410 is a device that performs operations based on linear combinations of the features of representations of the patterns of activity in a neural network (or of approximations of such representations). The operation can be, for example, an object localization operation, an object detection operation, an object segmentation operation, a prediction operation, an action selection operation, or the like.

Linear processor 2410 includes an input 2420 and an output 2425. Input 2420 is coupled to receive representations of the patterns of activity in source neural network 1705. Linear processor 2410 can receive the representations of the patterns of activity in source neural network 1705 in a variety of ways. For example, the representations can be received as discrete events or as a continuous stream over a real-time or non-real-time communication channel. Output 2425 is coupled to output the processing result from linear processor 2410. In some implementations, linear processor 2410 can be implemented on one or more computing devices with relatively limited computational performance. For example, linear processor 2410 can be implemented on a personal computer or on a mobile computing device such as a smartphone or tablet.
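
The following is a minimal, purely illustrative sketch of the kind of computation attributed to linear processor 2410, i.e., a linear combination of the features of a binary representation; the weight values and vector sizes are assumed placeholders rather than part of the disclosure:

    import numpy as np

    rng = np.random.default_rng(1)
    representation = rng.integers(0, 2, size=32)  # binary feature vector (cf. representation 1200)
    weights = rng.normal(size=(3, 32))            # hypothetical readout weights
    bias = np.zeros(3)

    outputs = weights @ representation + bias     # linear combination of the features
    print("processing result:", int(np.argmax(outputs)))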

Referring to Figure 25, classification system 2500 includes a source neural network 1705 and a neural network 2510. Neural network 2510 is a neural network device that is configured to perform operations based on non-linear combinations of the features of representations of the patterns of activity in a neural network (or of approximations of such representations). The operation can be, for example, an object localization operation, an object detection operation, an object segmentation operation, a prediction operation, an action selection operation, or the like. In the illustrated implementation, neural network 2510 is a feedforward network that includes an input layer 2520 and an output layer 2525. As with linear processor 2410, neural network 2510 can receive the representations of the patterns of activity in source neural network 1705 in a variety of ways.

In some implementations, neural network 2510 can perform inference on one or more computing devices with relatively limited computational performance. For example, neural network 2510 can be implemented on a personal computer or on a mobile computing device such as a smartphone or tablet, for example, in a neural processing unit of such a device. Like system 2400, system 2500 will generally be a distributed system in which a remote neural network 2510 communicates with source neural network 1705, e.g., via a data communication network. In some implementations, neural network 2510 can be, e.g., a deep neural network such as a convolutional neural network.
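
As a purely illustrative counterpart to the linear sketch above, the following assumes a tiny feedforward pass in which a hidden nonlinearity gives neural network 2510 a non-linear combination of the same features; the shapes and weights are invented:

    import numpy as np

    rng = np.random.default_rng(2)
    representation = rng.integers(0, 2, size=32).astype(float)

    w_hidden = rng.normal(size=(16, 32))  # input layer 2520 -> hidden layer
    w_out = rng.normal(size=(3, 16))      # hidden layer -> output layer 2525

    hidden = np.maximum(0.0, w_hidden @ representation)  # ReLU nonlinearity
    outputs = w_out @ hidden
    print("inference result:", int(np.argmax(outputs)))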

Referring to Figure 26, system 2600 includes a source approximator 1905 and a linear processor 2410. Despite any differences between approximation 1200' and representation 1200, linear processor 2410 can perform operations on approximation 1200'.

Referring to Figure 27, system 2700 includes a source approximator 1905 and a neural network 2510. Despite any differences between approximation 1200' and representation 1200, neural network 2510 can perform operations on approximation 1200'.
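
A purely illustrative sketch, with invented data, of why such differences can be tolerable: the same readout is applied to a representation and to an approximation that differs in only a couple of bits, and the two processing results can be compared directly (when few features differ, they will often coincide):

    import numpy as np

    rng = np.random.default_rng(3)
    representation = rng.integers(0, 2, size=32).astype(float)   # cf. representation 1200
    approximation = representation.copy()                        # cf. approximation 1200'
    flip = rng.choice(32, size=2, replace=False)
    approximation[flip] = 1.0 - approximation[flip]              # two flipped bits

    weights = rng.normal(size=(3, 32))
    print("on representation:", int(np.argmax(weights @ representation)))
    print("on approximation:", int(np.argmax(weights @ approximation)))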

In some implementations, systems 2600 and 2700 can be implemented on edge devices, such as edge devices 2100 and 2200 of Figures 21 and 22. In some implementations, systems 2600 and 2700 can be implemented as part of a system, such as system 2300 of Figure 23, in which representations of the topological structures that correspond to activity in a source neural network are used to train a local neural network.

Figure 28 is a schematic illustration of a reinforcement learning system 2800 that includes an artificial neural network that can be trained using representations of the occurrence of topological structures that correspond to activity in a source neural network. Reinforcement learning is a type of machine learning in which an artificial neural network learns from feedback regarding the consequences of actions taken in response to the artificial neural network's decisions. A reinforcement learning system moves from one state in an environment to another, new state by performing actions and receiving information characterizing the new state, as well as a reward and/or regret that signifies the success (or lack of success) of the action. Reinforcement learning seeks to maximize the total reward (or minimize the regret) through the learning process.
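
The following purely illustrative Python sketch shows the skeleton of such a reward-driven loop, with a toy one-dimensional environment standing in for environment 2830; the dynamics and reward rule are invented for illustration:

    import numpy as np

    rng = np.random.default_rng(4)
    state, target = 0.0, 5.0
    total_reward = 0.0

    for step in range(20):
        action = rng.choice([-1.0, 1.0])      # exploratory action (cf. actuator 2810)
        new_state = state + action            # environment transition
        reward = -abs(target - new_state)     # feedback signalling (un)success
        total_reward += reward                # learning would seek to maximize this
        state = new_state

    print(f"final state {state:+.1f}, cumulative reward {total_reward:.1f}")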

In the illustrated implementation, the artificial neural network in reinforcement learning system 2800 is a deep neural network 2805 (or other deep learning architecture) that is trained using a reinforcement learning approach. In some implementations, deep neural network 2805 can be a local artificial neural network (e.g., neural network 2510 of Figures 25 and 27) and can be implemented on, e.g., an automobile, an aircraft, a robot, or another device. This is not, however, a requirement; in other implementations, deep neural network 2805 can be implemented on a system of networked devices.

In addition to source approximator 1905 and deep neural network 2805, reinforcement learning system 2800 includes an actuator 2810, one or more sensors 2815, and a teacher module 2820. In some implementations, reinforcement learning system 2800 also includes one or more sources 2825 of additional data.

Actuator 2810 is a device that controls a mechanism or system that interacts with an environment 2830. In some implementations, actuator 2810 controls a physical mechanism or system (e.g., the steering of an automobile or the positioning of a robot). In other implementations, actuator 2810 can control a virtual mechanism or system (e.g., a virtual game board or an investment portfolio). Accordingly, environment 2830 can likewise be physical or virtual.

Sensors 2815 are devices that measure characteristics of environment 2830. At least some of the measurements made by the sensors will characterize interactions between the controlled mechanism or system and other aspects of environment 2830. For example, when actuator 2810 steers an automobile, one or more of sensors 2815 can measure one or more of the automobile's speed, direction, and acceleration, the automobile's proximity to other features, and the responses of other features to the automobile. As another example, when actuator 2810 controls an investment portfolio, sensors 2815 can measure the value of, and the risk associated with, the portfolio.

In general, both source approximator 1905 and teacher module 2820 are coupled to receive at least some of the measurements made by sensors 2815. For example, source approximator 1905 can receive measurement data at input layer 1915 and output an approximation 1200' of a representation of the occurrence of topological structures in the patterns of activity in a source neural network.

Teacher module 2820 is a device that is configured to interpret the measurements received from sensors 2815 and provide a reward and/or regret to deep neural network 2805. Rewards are positive and indicate successful control of the mechanism or system. Regrets are negative and indicate unsuccessful or less-than-optimal control. In general, teacher module 2820 also provides a characterization of the measurements along with the reward/regret for the reinforcement learning. In general, the characterization of the measurements is an approximation of a representation of the occurrence of topological structures in the patterns of activity in a source neural network (e.g., approximation 1200'). For example, teacher module 2820 can read the approximations 1200' output from source approximator 1905 and pair each read approximation 1200' with a corresponding reward/regret.

In many implementations, the reinforcement learning does not occur in real time in system 2800 or during deep neural network 2805's active control of actuator 2810. Rather, training feedback can be collected by teacher module 2820 and used for reinforcement training when deep neural network 2805 is not actively instructing actuator 2810. For example, in some implementations, teacher module 2820 can be remote from deep neural network 2805 and in only intermittent data communication with deep neural network 2805. Regardless of whether the reinforcement learning is intermittent or continuous, deep neural network 2805 can be evolved, e.g., to optimize reward and/or reduce regret using the information received from teacher module 2820.
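
A purely illustrative sketch of this intermittent arrangement, with invented data: feedback pairs, each coupling a characterization (an approximation such as 1200') with a reward/regret value, are buffered during operation and only later assembled into a training batch:

    import numpy as np

    rng = np.random.default_rng(5)
    feedback_buffer = []

    for step in range(100):                      # active-control phase
        approximation = rng.integers(0, 2, 32)   # cf. approximation 1200'
        reward = float(rng.normal())             # hypothetical reward/regret signal
        feedback_buffer.append((approximation, reward))

    # Later, offline: the buffered pairs become a training batch.
    features = np.stack([a for a, _ in feedback_buffer])
    rewards = np.array([r for _, r in feedback_buffer])
    print(features.shape, rewards.shape)         # (100, 32) (100,)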

In some implementations, system 2800 also includes one or more sources 2825 of additional data. Source approximator 1905 can also receive data from data sources 2825 at input layer 1915. In these cases, approximation 1200' will be produced by processing both the sensor data and the data from data sources 2825.

In some implementations, the data collected by one reinforcement learning system 2800 can be used for training or reinforcement learning in other systems, including other reinforcement learning systems. For example, the characterizations of the measurements, along with the reward/regret values, can be provided by teacher module 2820 to a data exchange system that collects such data from a variety of reinforcement learning systems and redistributes it among them. Furthermore, as discussed above, the characterization of the measurements can be an approximation of a representation of the occurrence of topological structures in the patterns of activity in a source neural network, such as approximation 1200'.

The particular operations performed by reinforcement learning system 2800 will depend on the particular operational context. For example, in contexts where source approximator 1905, deep neural network 2805, actuator 2810, and sensors 2815 are part of an automobile, deep neural network 2805 can perform object localization and/or object detection operations while steering the automobile.

In implementations where the data collected by reinforcement learning system 2800 is used for training or reinforcement learning in other systems, the reward/regret values, along with the approximations 1200' that characterize the state of the environment when the object localization and/or object detection operations were performed, can be provided to a data exchange system. The data exchange system can then distribute the reward/regret values and the approximations 1200' to other reinforcement learning systems 2800 associated with other vehicles so that reinforcement learning can proceed at those other vehicles. For example, the reinforcement learning can use the reward/regret values and the approximations 1200' to improve object localization and/or object detection operations at a second vehicle.

The operations learned at the other vehicles, however, need not be the same as the operations performed by deep neural network 2805. For example, reward/regret values based on travel time, together with an approximation 1200' produced from the characterization of input sensor data (e.g., an unexpectedly wet road at a location identified by a Global Positioning System (GPS) data source 2825), can be used for route planning operations at another vehicle.

Embodiments of the subject matter and the operations described in this disclosure can be implemented in digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed herein and their structural equivalents, or in combinations of one or more of them. Embodiments of the subject matter described in this disclosure can be implemented as one or more computer programs, i.e., one or more modules of computer program instructions, encoded on a computer storage medium for execution by, or to control the operation of, a data processing apparatus. Alternatively or in addition, the program instructions can be encoded on an artificially generated propagated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal, that is generated to encode information for transmission to a suitable receiver apparatus for execution by a data processing apparatus. A computer storage medium can be, or can be included in, a computer-readable storage device, a computer-readable storage substrate, a random or serial access memory array or device, or a combination of one or more of them. Moreover, while a computer storage medium is not a propagated signal, a computer storage medium can be a source or destination of computer program instructions encoded in an artificially generated propagated signal. A computer storage medium can also be, or be included in, one or more separate physical components or media (e.g., multiple CDs, disks, or other storage devices).

The operations described in this disclosure can be implemented as operations performed by a data processing apparatus on data stored on one or more computer-readable storage devices or received from other sources.

The term "data processing apparatus" encompasses all kinds of apparatus, devices, and machines for processing data, including, by way of example, a programmable processor, a computer, a system-on-a-chip (SoC), or multiple ones or combinations of the foregoing. The apparatus can include special-purpose logic circuitry, e.g., a field programmable gate array (FPGA) or an application-specific integrated circuit (ASIC). The apparatus can also include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, a cross-platform runtime environment, a virtual machine, or a combination of one or more of them. The apparatus and execution environment can realize various different computing model infrastructures, such as web services, distributed computing, and grid computing infrastructures.

A computer program (also known as a program, software, software application, script, or code) can be written in any form of programming language, including compiled or interpreted languages and declarative or procedural languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, object, or other unit suitable for use in a computing environment. A computer program may, but need not, correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, subprograms, or portions of code). A computer program can be deployed to be executed on one computer, or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network.

The processes and logic flows described in this disclosure can be performed by one or more programmable processors executing one or more computer programs to perform actions by operating on input data and generating output. The processes and logic flows can also be performed by, and apparatus can also be implemented as, special-purpose logic circuitry, e.g., a field programmable gate array or an application-specific integrated circuit.

Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and data from a read-only memory or a random access memory or both. The essential elements of a computer are a processor for performing actions in accordance with instructions and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, one or more mass storage devices for storing data, e.g., magnetic, magneto-optical, or optical disks. However, a computer need not have such devices. Moreover, a computer can be embedded in another device, e.g., a mobile telephone, a personal digital assistant (PDA), a mobile audio or video player, a game console, a Global Positioning System receiver, or a portable storage device (e.g., a Universal Serial Bus (USB) flash drive), to name just a few. Devices suitable for storing computer program instructions and data include all forms of non-volatile memory, media, and memory devices, including, by way of example, semiconductor memory devices (e.g., erasable programmable read-only memory (EPROM), electrically-erasable programmable read-only memory (EEPROM), and flash memory devices), magnetic disks (e.g., internal hard disks or removable disks), magneto-optical disks, and CD-ROM and DVD-ROM disks. The processor and the memory can be supplemented by, or incorporated in, special-purpose logic circuitry.

To provide for interaction with a user, embodiments of the subject matter described in this disclosure can be implemented on a computer having a display device (e.g., a cathode ray tube (CRT) or liquid crystal display (LCD) monitor) for displaying information to the user, and a keyboard and a pointing device (e.g., a mouse or a trackball) by which the user can provide input to the computer. Other kinds of devices can be used to provide for interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback, e.g., visual feedback, auditory feedback, or tactile feedback, and input from the user can be received in any form, including acoustic, speech, or tactile input. In addition, a computer can interact with a user by sending documents to, and receiving documents from, a device that is used by the user, for example, by sending web pages to a web browser on a user's client device in response to requests received from the web browser.

While this disclosure contains many specific implementation details, these should not be construed as limitations on the scope of any invention or of what may be claimed, but rather as descriptions of features specific to particular embodiments of particular inventions. Certain features that are described in this disclosure in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations, and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or a variation of a subcombination.

Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown, or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous. Moreover, the separation of various system components in the embodiments described above should not be understood as requiring such separation in all embodiments, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products.

Thus, particular implementations of the subject matter have been described. Other implementations are within the scope of the following claims. In some cases, the actions recited in the claims can be performed in a different order and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In certain implementations, multitasking and parallel processing may be employed.

A number of implementations have been described in this disclosure. Nevertheless, it will be understood that various modifications may be made to those implementations. For example, although representation 1200 is a binary representation in which each bit individually represents the presence or absence of a feature in a graph, other kinds of representations of the information are possible. For example, a vector or matrix of multi-valued, non-binary digits can be used to represent, e.g., the presence or absence of features and possibly other characteristics of those features. An example of such a characteristic is the weights of the active edges that constitute the feature.
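
A purely illustrative sketch, with invented numbers, of the two encodings just described: a binary vector marks only the presence or absence of each feature, while a real-valued vector can additionally carry a property such as the summed weight of the active edges forming the feature.

    import numpy as np

    presence = np.array([1, 0, 1, 1, 0], dtype=np.int8)       # binary form
    edge_weight_sums = np.array([0.7, 0.0, 2.3, 1.1, 0.0])    # multi-valued form

    # The binary form is recoverable from the richer one:
    print(np.array_equal(presence, (edge_weight_sums > 0).astype(np.int8)))  # True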

Accordingly, other implementations are within the scope of the following claims.

100‧‧‧Recurrent artificial neural network device

101, 102, 103, 104, 105, 106, 107‧‧‧Nodes

110‧‧‧Links

Claims (20)

1. A method of characterizing signal transmission activity in a recurrent artificial neural network, or a probability of exhibiting the signal transmission activity, the method being performed by a data processing apparatus and comprising: characterizing the signal transmission activity in the recurrent artificial neural network, or the probability of exhibiting the signal transmission activity, including: identifying a plurality of clique patterns of the signal transmission activity, or of the probability of the signal transmission activity, in the recurrent artificial neural network, wherein the clique patterns enclose a plurality of cavities; and outputting, from the recurrent artificial neural network, a binary sequence of zeros and ones, wherein each binary digit in the binary sequence represents whether a corresponding pattern is present in the recurrent artificial neural network, and wherein the binary digits represent the presence or absence of patterns of different dimensions having different numbers of nodes; and receiving and processing the binary sequence of zeros and ones at a digital data processing device, including: in response to a loss of a first binary digit in the binary sequence, evaluating, from the other binary digits in the binary sequence, a probability that the pattern corresponding to the first binary digit is still present.

2. The method of claim 1, further comprising defining a plurality of windows of time during which the signal transmission activity, or the probability of the signal transmission activity, in the recurrent artificial neural network is responsive to an input into the recurrent artificial neural network, wherein the clique patterns are identified in each of the windows of time.

3. The method of claim 2, further comprising identifying a first of the windows of time based on a distinguishable likelihood of the clique patterns occurring during the first window of time.

4. The method of claim 1, wherein identifying the clique patterns comprises identifying a plurality of directed cliques.

5. The method of claim 4, wherein identifying the directed cliques comprises discarding or ignoring lower-dimensional directed cliques that are contained within higher-dimensional directed cliques.

6. The method of claim 1, further comprising: distinguishing the clique patterns into a plurality of classes; and characterizing the signal transmission activity, or the probability of the signal transmission activity, according to a number of occurrences of the clique patterns in each of the classes.

7. The method of claim 6, wherein distinguishing the clique patterns comprises distinguishing the clique patterns according to a number of points in each of the clique patterns.
8. The method of claim 1, wherein each binary digit in the binary sequence represents whether a corresponding pattern is present in the recurrent artificial neural network regardless of where the corresponding pattern is located in the recurrent artificial neural network.

9. The method of claim 1, further comprising: constructing the recurrent artificial neural network, including: reading the binary digits output by the recurrent artificial neural network; and evolving a structure of the recurrent artificial neural network, wherein evolving the structure of the recurrent artificial neural network comprises: iteratively altering the structure; characterizing a complexity of the patterns in the altered structure; and using the characterization of the complexity of the patterns as an indication of whether the altered structure is desirable.

10. The method of claim 1, further comprising: identifying a plurality of decision moments in the recurrent artificial neural network based on identifying a complexity of the patterns in the recurrent artificial neural network, wherein a decision moment is a point in time at which the signal transmission activity, or the probability of the signal transmission activity, in the recurrent artificial neural network indicates a result of the recurrent artificial neural network's processing of an input, and identifying the decision moments comprises: identifying a point in time at which the signal transmission activity, or the probability of the signal transmission activity, has a complexity that is distinguishable from that of other signal transmission activity, or other probability of signal transmission activity, responsive to the input; and identifying the decision moments based on the point in time of the signal transmission activity, or the probability of the signal transmission activity, having the distinguishable complexity.

11. The method of claim 10, further comprising inputting a data stream into the recurrent artificial neural network, and identifying the clique patterns while the data stream is being input.
12. The method of claim 1, further comprising evaluating whether the signal transmission activity, or the probability of the signal transmission activity, is responsive to an input into the recurrent artificial neural network, wherein evaluating whether the activity is responsive to the input into the recurrent artificial neural network comprises: evaluating that relatively simple patterns occurring relatively early after the input event are responsive to the input, whereas relatively complex patterns occurring relatively early after the input event are not responsive to the input; and evaluating that relatively complex patterns occurring relatively late after the input event are responsive to the input, whereas relatively complex patterns occurring relatively early after the input event are not responsive to the input.

13. A system comprising one or more computers that perform operations comprising: characterizing signal transmission activity in a recurrent artificial neural network, or a probability of exhibiting the signal transmission activity, including: identifying a plurality of clique patterns of the signal transmission activity, or of the probability of the signal transmission activity, in the recurrent artificial neural network, wherein the clique patterns enclose a plurality of cavities; and outputting, from the recurrent artificial neural network, a binary sequence of zeros and ones, wherein each binary digit in the binary sequence represents whether a corresponding pattern is present in the recurrent artificial neural network, and wherein the binary digits represent the presence or absence of patterns of different dimensions having different numbers of nodes; and receiving and processing the binary sequence of zeros and ones at a digital data processing device, including: in response to a loss of a first binary digit in the binary sequence, evaluating, from the other binary digits in the binary sequence, a probability that the pattern corresponding to the first binary digit is still present.

14. The system of claim 13, wherein the operations further comprise defining a plurality of windows of time during which the signal transmission activity, or the probability of the signal transmission activity, in the recurrent artificial neural network is responsive to an input into the recurrent artificial neural network, wherein the clique patterns are identified in the windows of time.

15. The system of claim 14, wherein the operations further comprise identifying a first of the windows of time based on a distinguishable likelihood of the clique patterns of activity, the clique patterns occurring during the first window of time.
16. The system of claim 14, wherein identifying the clique patterns comprises discarding or ignoring lower-dimensional directed cliques that are contained within higher-dimensional directed cliques.

17. The system of claim 13, wherein the operations further comprise: constructing the recurrent artificial neural network, including: reading the binary digits output by the recurrent artificial neural network; and evolving a structure of the recurrent artificial neural network, wherein evolving the structure of the recurrent artificial neural network comprises: iteratively altering the structure; and characterizing a complexity of the patterns in the structure, wherein the characterization of the complexity of the patterns is used to indicate whether the altered structure is desirable.

18. The system of claim 13, wherein the operations further comprise: identifying a plurality of decision moments in the recurrent artificial neural network based on identifying a complexity of the patterns in the recurrent artificial neural network, wherein a decision moment is a point in time at which the signal transmission activity, or the probability of the signal transmission activity, in the recurrent artificial neural network indicates a result of the network's processing of an input, and identifying the decision moments comprises: identifying a point in time at which the signal transmission activity, or the probability of the signal transmission activity, has a complexity that is distinguishable from that of other activity responsive to the input; and identifying the decision moments based on the point in time of the signal transmission activity, or the probability of the signal transmission activity, having the distinguishable complexity.

19. The system of claim 18, wherein the operations further comprise inputting a data stream into the recurrent artificial neural network, and identifying the clique patterns while the data stream is being input.

20. The system of claim 13, wherein the operations further comprise evaluating whether the activity is responsive to an input into the recurrent artificial neural network, wherein evaluating whether the signal transmission activity, or the probability of the signal transmission activity, is responsive to the input into the recurrent artificial neural network comprises: evaluating that relatively simple patterns occurring relatively early after the time of the input are responsive to the input, whereas relatively complex patterns occurring relatively early after the time of the input are not responsive to the input; and evaluating that relatively complex patterns occurring relatively late after the time of the input are responsive to the input, whereas relatively complex patterns occurring relatively early after the time of the input are not responsive to the input.
TW108119813A 2018-06-11 2019-06-06 Method of characterizing activity in an artificial nerual network, and system comprising one or more computers operable to perform said method TWI822792B (en)

Applications Claiming Priority (10)

Application Number Priority Date Filing Date Title
US16/004,837 2018-06-11
US16/004,635 US20190378007A1 (en) 2018-06-11 2018-06-11 Characterizing activity in a recurrent artificial neural network
US16/004,757 2018-06-11
US16/004,796 2018-06-11
US16/004,635 2018-06-11
US16/004,671 US11972343B2 (en) 2018-06-11 Encoding and decoding information
US16/004,671 2018-06-11
US16/004,796 US20190378000A1 (en) 2018-06-11 2018-06-11 Characterizing activity in a recurrent artificial neural network
US16/004,837 US11663478B2 (en) 2018-06-11 2018-06-11 Characterizing activity in a recurrent artificial neural network
US16/004,757 US11893471B2 (en) 2018-06-11 2018-06-11 Encoding and decoding information and artificial neural networks

Publications (2)

Publication Number Publication Date
TW202001693A TW202001693A (en) 2020-01-01
TWI822792B true TWI822792B (en) 2023-11-21

Family

ID=66776339

Family Applications (1)

Application Number Title Priority Date Filing Date
TW108119813A TWI822792B (en) 2018-06-11 2019-06-06 Method of characterizing activity in an artificial nerual network, and system comprising one or more computers operable to perform said method

Country Status (5)

Country Link
EP (5) EP3803699A1 (en)
KR (5) KR102497238B1 (en)
CN (5) CN112567387A (en)
TW (1) TWI822792B (en)
WO (5) WO2019238483A1 (en)

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11615285B2 (en) 2017-01-06 2023-03-28 Ecole Polytechnique Federale De Lausanne (Epfl) Generating and identifying functional subnetworks within structural networks
US11893471B2 (en) 2018-06-11 2024-02-06 Inait Sa Encoding and decoding information and artificial neural networks
US11663478B2 (en) 2018-06-11 2023-05-30 Inait Sa Characterizing activity in a recurrent artificial neural network
US11569978B2 (en) 2019-03-18 2023-01-31 Inait Sa Encrypting and decrypting information
US11652603B2 (en) 2019-03-18 2023-05-16 Inait Sa Homomorphic encryption
US11610134B2 (en) * 2019-07-08 2023-03-21 Vianai Systems, Inc. Techniques for defining and executing program code specifying neural network architectures
US11816553B2 (en) 2019-12-11 2023-11-14 Inait Sa Output from a recurrent neural network
US11651210B2 (en) 2019-12-11 2023-05-16 Inait Sa Interpreting and improving the processing results of recurrent neural networks
US11580401B2 (en) 2019-12-11 2023-02-14 Inait Sa Distance metrics and clustering in recurrent neural networks
US11797827B2 (en) 2019-12-11 2023-10-24 Inait Sa Input into a neural network
TWI769466B (en) * 2020-06-17 2022-07-01 台達電子工業股份有限公司 Neural network system and method of operating the same
CN112073217B (en) * 2020-08-07 2023-03-24 之江实验室 Multi-network structure difference vectorization method and device
CN113219358A (en) * 2021-04-29 2021-08-06 东软睿驰汽车技术(沈阳)有限公司 Battery pack health state calculation method and system and electronic equipment
TWI769875B (en) * 2021-06-24 2022-07-01 國立中央大學 Deep learning network device, memory access method and non-volatile storage medium used therefor
CN113626721B (en) * 2021-10-12 2022-01-25 中国科学院自动化研究所 Regrettful exploration-based recommendation method and device, electronic equipment and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5844286B2 (en) * 2010-02-05 2016-01-13 エコール・ポリテクニーク・フェデラル・ドゥ・ローザンヌ (ウ・ペ・エフ・エル)Ecole Polytechnique Federalede Lausanne (Epfl) Organizing neural networks

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Journal: Michael W. Reimann et al., "Cliques of Neurons Bound into Cavities Provide a Missing Link between Structure and Function," Frontiers in Computational Neuroscience, vol. 11, 12 June 2017

Also Published As

Publication number Publication date
WO2019238523A1 (en) 2019-12-19
CN112585621A (en) 2021-03-30
EP3803708A1 (en) 2021-04-14
CN112567388A (en) 2021-03-26
EP3803707A1 (en) 2021-04-14
TW202001693A (en) 2020-01-01
CN112567390A (en) 2021-03-26
KR20210008418A (en) 2021-01-21
WO2019238483A1 (en) 2019-12-19
KR20210008858A (en) 2021-01-25
KR102488042B1 (en) 2023-01-12
EP3803706A1 (en) 2021-04-14
KR102497238B1 (en) 2023-02-07
KR20210010894A (en) 2021-01-28
KR20210008417A (en) 2021-01-21
WO2019238513A1 (en) 2019-12-19
WO2019238512A1 (en) 2019-12-19
KR102526132B1 (en) 2023-04-26
CN112567389A (en) 2021-03-26
KR102465409B1 (en) 2022-11-09
WO2019238522A1 (en) 2019-12-19
KR102475411B1 (en) 2022-12-07
KR20210008419A (en) 2021-01-21
EP3803705A1 (en) 2021-04-14
EP3803699A1 (en) 2021-04-14
CN112567387A (en) 2021-03-26

Similar Documents

Publication Publication Date Title
TWI822792B (en) Method of characterizing activity in an artificial nerual network, and system comprising one or more computers operable to perform said method
US11663478B2 (en) Characterizing activity in a recurrent artificial neural network
US11893471B2 (en) Encoding and decoding information and artificial neural networks
US20190378000A1 (en) Characterizing activity in a recurrent artificial neural network
US20190378007A1 (en) Characterizing activity in a recurrent artificial neural network
US11853891B2 (en) System and method with federated learning model for medical research applications
McDonnell et al. An introductory review of information theory in the context of computational neuroscience
CN115053231A (en) Input into neural networks
Malik et al. Architecture, generative model, and deep reinforcement learning for IoT applications: Deep learning perspective
US11654366B2 (en) Computer program for performing drawing-based security authentication
US11972343B2 (en) Encoding and decoding information
US20190378008A1 (en) Encoding and decoding information
Jeong Performance of Neural Computing Techniques in Communication Networks
Jafarigol Uncovering the Potential of Federated Learning: Addressing Algorithmic and Data-driven Challenges under Privacy Restrictions
Alhalabi Ensembles of Pruned Deep Neural Networks for Accurate and Privacy Preservation in IoT Applications
CN117195058A (en) Brain space-time data analysis method and system based on neural network
CN117112951A (en) User account pushing method and device, electronic equipment and storage medium
Polepalli Scalable Digital Architecture of a Liquid State Machine
CN116935143A (en) DFU medical image classification method and system based on personalized federal learning