WO2010144947A1 - Construction and training of a recurrent neural network - Google Patents

Construction and training of a recurrent neural network

Info

Publication number
WO2010144947A1
Authority
WO
WIPO (PCT)
Prior art keywords
local
recurrent neural
neural network
network
node
Prior art date
Application number
PCT/AU2010/000720
Other languages
French (fr)
Inventor
Oliver Obst
Original Assignee
Commonwealth Scientific And Industrial Research Organisation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority to AU2009902733A0
Priority to AU2009902733
Application filed by Commonwealth Scientific And Industrial Research Organisation
Publication of WO2010144947A1


Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05B CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B13/00 Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion
    • G05B13/02 Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric
    • G05B13/0265 Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric the criterion being a learning criterion
    • G05B13/027 Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric the criterion being a learning criterion using neural networks only
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06N COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computer systems based on biological models
    • G06N3/02 Computer systems based on biological models using neural network models
    • G06N3/04 Architectures, e.g. interconnection topology
    • G06N3/0445 Feedback networks, e.g. Hopfield nets, associative networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06N COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computer systems based on biological models
    • G06N3/02 Computer systems based on biological models using neural network models
    • G06N3/08 Learning methods

Abstract

A method for constructing and training a discrete-time recurrent neural network for predicting network inputs is provided. A main recurrent neural network is constructed, formed from a plurality of nodes. Each node hosts a local recurrent neural network formed of a plurality of connected units. The units are connected by weighted connections. A local shadow recurrent neural network is constructed on each node. The local shadow recurrent neural network is a copy of the local recurrent neural network on the respective node, but with certain restrictions on its connections to other nodes. The main recurrent neural network is trained to determine the weights of each connection on each node to provide a local output on each node correlating to a prediction of the local input on the respective node. The training includes, for each discrete time step and on each node: feeding a local input to the local recurrent neural network to cause local network activations; and feeding a training input to the local shadow recurrent neural network and applying learning rules to determine connection weights on the local shadow recurrent neural network. The determined connection weights from the local shadow network are copied to the local network.

Description

CONSTRUCTION AND TRAINING OF A RECURRENT NEURAL NETWORK

FIELD OF THE INVENTION

The present invention relates to recurrent neural networks.

BACKGROUND TO THE INVENTION

The origin of the present invention stems from research undertaken by the present inventor into the field of wireless sensor networks.

Wireless sensor networks (WSNs) are increasingly used for environmental monitoring over extended periods of time. To facilitate deployments in remote areas, sensor nodes are typically small, solar-powered devices with limited computational capabilities. Over the duration of the deployment, harsh weather conditions can lead to problems like mis-calibration or the build-up of dust on sensors and solar panels, leading to incorrect readings or shorter duty-cycles and thus less data. Existing WSNs often require such problems to be detected and diagnosed manually.

The inventor researched processes and methods in which the detection of faults could be automated through the use of models of system behaviour. Hence, an initial proposal for investigation was to determine a process for automatically building a model of the normal system behaviour and to use this model to detect anomalies. With the result of this process, it would be possible to notify administrators, who can then decide on appropriate actions to prevent loss of data.

In respect of building a model of system behaviour, investigations turned to the field of artificial neural networks (ANNs). In particular, given the intended application to a dynamical system, consideration was given to recurrent neural networks (RNNs).

International PCT publication no. WO02/31764 discloses a method of constructing and teaching an RNN. The disclosure of WO02/31764 is herein incorporated by way of reference. This publication provides a relevant background discussion of the development and practical issues relevant to this field.

While the use of an RNN theoretically suggested a suitable option for the purposes of the inventor's research, the practical application proved difficult. The success of an RNN depends upon a reliable teaching/learning phase in which the network self-learns, which requires high computational capabilities. Known RNNs require a large memory footprint in the learning phase. In practice, these requirements exceed the capabilities found on the type of WSN contemplated. An object of the present invention is to provide an alternative construction and training of an RNN which can be adopted for the purpose of fault detection in a WSN.

SUMMARY OF THE INVENTION

According to a first aspect of the present invention there is provided a method for constructing and training a discrete-time recurrent neural network for predicting network inputs, the method including the steps of: i) constructing a main recurrent neural network formed from a plurality of nodes, wherein each node hosts a local recurrent neural network formed of a plurality of connected units, the connected units including one or more input units, one or more hidden units and one or more output units; each node further including at least one proxy unit, the at least one proxy unit providing a connection between the local recurrent network on the respective node and one or more proxy units on other nodes in the main network; wherein the units are connected by weighted connections; ii) constructing a local shadow recurrent neural network on each node, the local shadow recurrent neural network being a copy of the local recurrent neural network on the respective node; wherein the local shadow recurrent neural network is arranged to receive and accept activations from local recurrent neural networks on other nodes via the proxy units and prevented from providing any activations to local recurrent neural networks or shadow local recurrent neural networks on other nodes; iii) training the main recurrent neural network to determine the weights of each connection on each node to provide a local output on each node correlating to a prediction of the local input on the respective node, the training including the steps of: a) for each discrete time step and on each node: feeding a local input to the local recurrent neural network to cause local network activations; feeding a training input to the local shadow recurrent neural network to cause shadow network activations and applying learning rules to determine connection weights on the local shadow recurrent neural network leading to a shadow network output which correlates with a prediction of the local input; wherein the training input is unrelated to the local input; copying the determined connection weights from the local shadow network to the local network; b) repeating step a) until the main network is trained. 
According to a further aspect of the present invention there is provided a method for training a discrete-time recurrent neural network for predicting network inputs, said recurrent neural network having a construction formed from a plurality of nodes, wherein each node hosts a local recurrent neural network formed of a plurality of connected units, said connected units including one or more input units, one or more hidden units and one or more output units; each node further including at least one proxy unit, said at least one proxy unit providing a connection between the local recurrent network on the respective node and one or more proxy units on other nodes in the main network; wherein said units are connected by weighted connections, said method including the steps of: i) constructing a local shadow recurrent neural network on each node, said local shadow recurrent neural network being a copy of the local recurrent neural network on the respective node; wherein said local shadow recurrent neural network is arranged to receive and accept activations from local recurrent neural networks on other nodes via said proxy units and prevented from providing any activations to local recurrent neural networks or shadow local recurrent neural networks on other nodes; ii) training said main recurrent neural network to determine the weights of each connection on each node to provide a local output on each node correlating to a prediction of the local input on the respective node, said training including the steps of: a) for each discrete time step and on each node: feeding a local input to the local recurrent neural network to cause local network activations; feeding a training input to the local shadow recurrent neural network to cause shadow network activations and applying learning rules to determine connection weights on said local shadow recurrent neural network leading to a shadow network output which correlates with a prediction of the local input; wherein said training input is unrelated to said local input; copying the determined connection weights from said local shadow network to said local network; b) repeating step a) until said main network is trained.

Embodiments of the present invention advantageously provide a recurrent neural network which can be practically applied to a device network, for example a wireless sensor network, to predict the network behaviour and detect faults based upon actual and predicted parameters.

An embodiment of the present invention will now be described with reference to the accompanying drawings.

BRIEF DESCRIPTION OF THE DRAWINGS

Fig. 1 illustrates schematically a sensor node network;
Fig. 2 illustrates schematically a local recurrent neural network on a node; and
Fig. 3 illustrates schematically an arrangement of a local recurrent neural network and a local shadow recurrent neural network during training.

DESCRIPTION OF PREFERRED EMBODIMENT

The invention will be described with reference to an environmental data collection sensor network 10 (as illustrated schematically, by way of example, in Fig. 1). It will be understood that the system architecture and learning capabilities can be implemented in other fields, particularly in sensor networks which sense low entropy data, for example temperature and moisture.

Specifically, the inventive concept is used, in this example, to allow sensor fault detection. The architecture of the system is devised to enable it to learn spatio-temporal correlations of the device network (e.g. a WSN) and to make use of them for detecting anomalies in a decentralised way, without using global communication. In this approach, a plurality of sensor nodes q_i, where i = 1, 2, 3, ..., participate in a distributed recurrent neural network 10, where each of the sensor nodes q_i hosts only a few neural units and communicates only with its local neighbours, as indicated by the connections w_{i,i-1}, w_{i,i+1} and w_{ij}. The proposed online learning approach is a variant of backpropagation-decorrelation (BPDC) learning with intrinsic plasticity (IP). Whilst an alternative approach would be to employ distributed fault detection based on echo state learning, such an offline learning approach is computationally too demanding to be directly executed on sensor nodes. Consequently, the preferred approach discussed below is suited to learning directly on sensor nodes because it has a smaller memory footprint during training than echo state learning. WSNs spend a large part of their energy on communication between individual nodes q_i. Routing data between distant nodes involves the participation of intermediate nodes ("multi-hop"), and thus further increases energy consumption.

To distribute a recurrent neural network over a WSN, each node hosts some units of the entire neural network. Connections w_{ij} between units are restricted to those hosted on the same node or on nodes in the immediate spatial neighbourhood, as shown e.g. in Fig. 1. This results, on each device, in a small local reservoir with local input units K_q and output units L_q, with additional connections between neighbours (see Fig. 2). From a global perspective, a spatially organised reservoir is obtained, which is trained using a distributed version of BPDC learning, termed Spatially Organised and Distributed Backpropagation-Decorrelation (SODBPDC).
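By way of illustration only, the following sketch (in Python, with assumed names, sizes and connection density, none of which appear in the patent) builds such a spatially restricted global weight matrix, in which non-zero blocks exist only between units hosted on the same node or on neighbouring nodes:

```python
import numpy as np

def build_local_weights(num_nodes, units_per_node, neighbours, density=0.2, seed=0):
    """Return a global weight matrix W whose non-zero entries connect only
    units hosted on the same node or on spatially neighbouring nodes."""
    rng = np.random.default_rng(seed)
    m = num_nodes * units_per_node
    W = np.zeros((m, m))
    for q in range(num_nodes):
        for r in [q] + list(neighbours[q]):
            rows = slice(q * units_per_node, (q + 1) * units_per_node)
            cols = slice(r * units_per_node, (r + 1) * units_per_node)
            block = rng.uniform(-1.0, 1.0, (units_per_node, units_per_node))
            mask = rng.random((units_per_node, units_per_node)) < density
            W[rows, cols] = block * mask  # sparse block for this node pair
    return W

# Example: 4 nodes arranged in a line, 7 units per node.
neighbours = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
W = build_local_weights(4, 7, neighbours)
```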

Each sensor node q_i hosts the same number M_q of units, namely L_q output units, N_q hidden units, and K_q input units. The whole recurrent network consists of M units, i.e. L output units, N hidden units, and K input units. For initial theoretical considerations, it is convenient to represent the activations as a global vector x:

$$x = \big(\underbrace{x^{1}_{\text{out}}, \ldots, x^{Q}_{\text{out}}}_{\text{output units, nodes } 1..Q},\; \underbrace{x^{1}_{\text{hid}}, \ldots, x^{Q}_{\text{hid}}}_{\text{hidden units, nodes } 1..Q},\; \underbrace{x^{1}_{\text{in}}, \ldots, x^{Q}_{\text{in}}}_{\text{input units, nodes } 1..Q}\big)^{\top}$$

Likewise, synaptic connection weights between units can be represented in a global M x M matrix W = (w_{ij}). Activations are updated in a distributed way, as in the non-distributed version of BPDC:

$$x(k+1) = f\big(W\,x(k)\big)$$

for each time step k, where f denotes the component-wise activation function, so that each node computes all of its local x_j(k+1).
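A minimal sketch of this local update, assuming tanh units and that W_local holds the rows of W belonging to the node's own units (both are assumptions, not specified above):

```python
import numpy as np

def node_step(W_local, x_local):
    """One local step of x(k+1) = f(W x(k)): x_local stacks the node's own
    unit activations with the values read from its proxy units."""
    return np.tanh(W_local @ x_local)
```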

In a practical implementation, both W and x are distributed over multiple sensor nodes. Moreover, there are connections in W between units on different devices, which therefore need a specified physical location, as exemplified in Fig. 1. Incoming connections from units hosted on neighbouring sensor nodes are stored on the local node. Units with outgoing connections to units on other devices simply forward their activations, unchanged, to the neighbouring device. Additional proxy units on the neighbour act as placeholders for remote units and take activations from connected units (see Fig. 2). From proxy units, there are only local connections to the reservoir or to output units. Proxy units also eliminate the need for all sensor nodes to be tightly synchronised, as long as they all use the same interval to process data; typical update frequencies are very low, e.g. once every minute or once every 15 minutes. Newly computed activations are forwarded to connected proxy units, where they can be used by the neighbouring device. After their values have been used, proxy units are reset to 0 in order to avoid using old values in the case of a sensor network link failure.
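The proxy-unit behaviour described here might be sketched as follows; the class and method names are illustrative, not from the patent:

```python
class ProxyUnit:
    """Placeholder for a remote unit. Holds the last activation forwarded by
    the neighbour; it is reset to zero after being read, so that stale values
    are not reused if a network link fails."""
    def __init__(self):
        self.value = 0.0

    def receive(self, activation):
        # Called when the neighbouring node forwards a newly computed activation.
        self.value = activation

    def read(self):
        # Called once per local update interval; resets the unit afterwards.
        v, self.value = self.value, 0.0
        return v
```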

Each sensor node is responsible for updating its local output weights. Let x^q denote the vector of those activations in x which can be accessed locally on node q, either directly or by reading out a proxy unit. Let the set O contain the indices j of output units, Ô ⊂ O the indices of the local output units, and g : O → Ô a mapping from global to local unit indices. The SODBPDC learning rule is executed on each local sensor node and updates the global matrix W:

$$\Delta w_{ij}(k+1) = \frac{\eta}{\|x^{q}(k)\|^{2} + \varepsilon}\; x^{q}_{j}(k)\,\gamma_{i}(k+1)$$

where

$$\gamma_{i}(k+1) = \sum_{s \in \hat{O}} w_{i\,g(s)}\, f'\big(x_{g(s)}(k)\big)\, e_{g(s)}(k) \;-\; e_{g(i)}(k+1)$$

A learning rate η = 0.03 and a regularisation constant ε = 0.002 were used in the experiments described below; e_{g(s)}(k) = x^{q}_{g(s)}(k) − y^{q}_{g(s)}(k) represents the error between the local outputs x^{q} and the local teaching signal y^{q}.
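A hedged sketch of this learning rule in Python follows; the exact form of γ is reconstructed from the equations above, tanh units are assumed for f, and all names are illustrative rather than taken from the patent:

```python
import numpy as np

ETA, EPS = 0.03, 0.002  # learning rate and regularisation constant from the text

def sodbpdc_update(W, x_k, x_k1, y_k, y_k1, out_idx, f_prime=lambda a: 1.0 - a**2):
    """One SODBPDC weight update on a node. x_k/x_k1 are the locally accessible
    activations at steps k and k+1, y_k/y_k1 the local teaching signals, and
    out_idx the positions of the local output units; f_prime assumes tanh."""
    e_k = x_k[out_idx] - y_k          # local output errors e(k)
    e_k1 = x_k1[out_idx] - y_k1       # local output errors e(k+1)
    norm = x_k @ x_k + EPS            # ||x^q(k)||^2 + epsilon
    for n, i in enumerate(out_idx):   # only output weights are adapted in BPDC
        gamma = W[i, out_idx] @ (f_prime(x_k[out_idx]) * e_k) - e_k1[n]
        W[i, :] += (ETA / norm) * x_k * gamma
    return W
```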

In the application to detect sensor faults, the task is to predict local sensor readings based on information from other nodes. It is expected that a reading and its prediction will be approximately equal when the sensor works normally. Faults are detected when the difference between the two exceeds a specified threshold. During the initial training period, it is assumed that there are no sensor faults, so that the training output for each sensor is exactly the same as the input time series.

Because this approach detects faults based on differences between predictions and local readings, it is important that predictions are independent of local sensors. This is achieved by replacing the input of the particular sensor with white noise. The sensor reading is used as a teacher signal, and the goal of the training is to learn the relation between the local sensor value and the values of its neighbours. A further aim is to learn on all sensor nodes simultaneously; this is not possible if random input has to be fed into all inputs at the same time.

To nevertheless train all outputs in parallel, in accordance with the invention, an identical copy of the local recurrent network is created on each node. The original instance, the primary network, is connected to the local networks on neighbouring nodes as described above, and receives normal input from its local sensors. The global network joining all local primary instances with activations x^q has an activation x. In x, all input units carry sensor readings at all times. The second instance, the shadow network, has only incoming connections from primary networks on neighbouring nodes, and does not forward its local activations to any other node. The local input units of the shadow network are fed with random noise. This results in an individual global activation x^q for each node q. In each such x^q, there is no correlation between the local input and the local training signal. The primary network is responsible for feeding its activations to neighbouring nodes. Both the SODBPDC rule and IP are applied to the shadow network. After application of the learning rules, a consolidation step copies the output weights and the local IP learning parameters from the shadow network back to the primary network. This is schematically illustrated in Fig. 3.
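The per-step interplay of primary and shadow networks might look as follows in a simplified single-node sketch; the unit layout, the noise distribution, and the helper sodbpdc_update from the earlier sketch are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
OUT, HID, IN, PROXY = [0], [1, 2, 3, 4, 5], [6], [7, 8]  # assumed 1/5/1 + 2 proxies

def train_step(W_pri, W_sha, x_pri, x_sha, reading, prev_reading, remote):
    """One consolidated training step on node q (illustrative only)."""
    # Primary network: real sensor reading plus neighbour activations via the
    # proxy units; its new activations are what gets forwarded to neighbours.
    x_pri[IN], x_pri[PROXY] = reading, remote
    x_pri[:] = np.tanh(W_pri @ x_pri)
    # Shadow network: white noise replaces the local sensor input, so its
    # output cannot simply copy the local reading.
    x_sha[IN], x_sha[PROXY] = rng.normal(), remote
    x_prev = x_sha.copy()
    x_sha[:] = np.tanh(W_sha @ x_sha)
    # Learning (reusing sodbpdc_update from the earlier sketch) is applied
    # only to the shadow, with the real readings as teacher signals.
    sodbpdc_update(W_sha, x_prev, x_sha, np.array([prev_reading]),
                   np.array([reading]), OUT)
    # Consolidation: copy the adapted output weights back to the primary.
    W_pri[OUT, :] = W_sha[OUT, :]
    return x_pri[OUT]  # the primary's prediction of the local reading
```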

Once the training of the primary network is completed, the shadow network becomes effectively redundant. Consequently, the shadow network can be deleted.

Sensor faults are detected when the difference between the prediction of a reading and the actual reading exceeds a threshold. In practice, detecting faulty sensors does not necessarily mean that the device will be replaced or repaired immediately. While the system continues to run with input from faulty sensors, the prediction quality of other nodes will decay. In order to decrease their effect on the system, faulty devices are flagged and their sensor input is disconnected from the SODBPDC network. The sensor input is then replaced with the local prediction of the sensor readings as computed by the SODBPDC network. As noted in the following experiments, this helps to maintain a high prediction quality for the remaining sensors even with a larger number of faults in the system.
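A minimal sketch of this detection-and-substitution policy, assuming a fixed numeric threshold (the text leaves the threshold application-specific):

```python
THRESHOLD = 0.1  # illustrative value only; not specified in the text

def effective_input(reading, prediction, faulty):
    """Flag a fault when |prediction - reading| exceeds the threshold; once
    flagged, the recursive prediction replaces the faulty sensor's input."""
    if abs(prediction - reading) > THRESHOLD:
        faulty = True  # device is flagged for administrators
    return (prediction if faulty else reading), faulty
```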

Experimental results

The following training data are time series from a sensor network, experimentally implementing the inventive concept, deployed in Belmont, near Brisbane, Australia, with 32 sensor nodes q_i, as per Fig. 1. Because the data was collected by forwarding to a central node, it contained "holes" as a result of duty cycling. Smaller gaps were resampled by interpolation, while the larger and network-wide gaps were left in the data. The purpose of the experiment was to monitor the condition of solar panels by measuring the solar voltage on each device. In all the experiments, the SODBPDC network consisted of 32 output units, 160 hidden units, and 32 input units (i.e. 1/5/1 units formed as a local network on each node).

Example 1 - comparison to a centralised approach using BPDC and IP learning.

The prediction errors of the approach were compared against those of an approach using a centralised reservoir of the same size. For both approaches, link qualities were simulated from 10% to 100%. In the distributed approach, this represents the probability of communication between any two nodes; in the centralised setting, the probability of communication from any node to the central reservoir. The centralised reservoir was used to predict the solar voltage of one node, taking the solar voltage of the remaining 31 nodes as input. It was found that SODBPDC learned more slowly in the beginning, very likely an effect of the additional random noise signal. For high link qualities, SODBPDC performed equally well or better than BPDC, but BPDC managed to maintain a lower NRMSE even for poor link qualities.
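For reference, the NRMSE quoted above might be computed as follows, assuming normalisation by the standard deviation of the target series (the text does not specify the normalisation):

```python
import numpy as np

def nrmse(pred, target):
    """Normalised root-mean-square error between predictions and targets."""
    pred, target = np.asarray(pred), np.asarray(target)
    return np.sqrt(np.mean((pred - target) ** 2)) / np.std(target)
```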

Example 2 - robustness against multiple failures.

In this experiment an increasing number of sensors were randomly selected to fail, with the faulty sensors returning 25% of the original value. Of interest was the prediction error for the remaining healthy nodes when (a) the faulty sensors continued to feed data into the network, and (b) the input from faulty sensors was replaced with (recursive) predictions. All nodes were trained using the first 43000 values of the time series. It was found that using the recursive predictions keeps the error far below that of the approach using faulty sensor data, for up to 29 failed sensors.

Claims

CLAIMS:
1. A method for constructing and training a discrete-time recurrent neural network for predicting network inputs, said method including the steps of: i) constructing a main recurrent neural network formed from a plurality of nodes, wherein each node hosts a local recurrent neural network formed of a plurality of connected units, said connected units including one or more input units, one or more hidden units and one or more output units; each node further including at least one proxy unit, said at least one proxy unit providing a connection between the local recurrent network on the respective node and one or more proxy units on other nodes in the main network; wherein said units are connected by weighted connections; ii) constructing a local shadow recurrent neural network on each node, said local shadow recurrent neural network being a copy of the local recurrent neural network on the respective node; wherein said local shadow recurrent neural network is arranged to receive and accept activations from local recurrent neural networks on other nodes via said proxy units and prevented from providing any activations to local recurrent neural networks or shadow local recurrent neural networks on other nodes; iii) training said main recurrent neural network to determine the weights of each connection on each node to provide a local output on each node correlating to a prediction of the local input on the respective node, said training including the steps of: a) for each discrete time step and on each node: feeding a local input to the local recurrent neural network to cause local network activations; feeding a training input to the local shadow recurrent neural network to cause shadow network activations and applying learning rules to determine connection weights on said local shadow recurrent neural network leading to a shadow network output which correlates with a prediction of the local input; wherein said training input is unrelated to said local input; copying the determined connection weights from said local shadow network to said local network; b) repeating step a) until said main network is trained.
2. A method for training a discrete-time recurrent neural network for predicting network inputs, said recurrent neural network having a construction formed from a plurality of nodes, wherein each node hosts a local recurrent neural network formed of a plurality of connected units, said connected units including one or more input units, one or more hidden units and one or more output units; each node further including at least one proxy unit, said at least one proxy unit providing a connection between the local recurrent network on the respective node and one or more proxy units on other nodes in the main network; wherein said units are connected by weighted connections, said method including the steps of: i) constructing a local shadow recurrent neural network on each node, said local shadow recurrent neural network being a copy of the local recurrent neural network on the respective node; wherein said local shadow recurrent neural network is arranged to receive and accept activations from local recurrent neural networks on other nodes via said proxy units and prevented from providing any activations to local recurrent neural networks or shadow local recurrent neural networks on other nodes; ii) training said main recurrent neural network to determine the weights of each connection on each node to provide a local output on each node correlating to a prediction of the local input on the respective node, said training including the steps of: a) for each discrete time step and on each node: feeding a local input to the local recurrent neural network to cause local network activations; feeding a training input to the local shadow recurrent neural network to cause shadow network activations and applying learning rules to determine connection weights on said local shadow recurrent neural network leading to a shadow network output which correlates with a prediction of the local input; wherein said training input is unrelated to said local input; copying the determined connection weights from said local shadow network to said local network; b) repeating step a) until said main network is trained.
3. The method of claim 1 or 2, wherein each node is restricted to being connected to only neighbouring nodes via said proxy units.
4. The method of any one of the preceding claims, wherein each node hosts the same number of input units, hidden units, output units and proxy units.
5. The method according to any one of the preceding claims, further including the step of resetting each proxy unit to zero prior to the steps of feeding the local input and feeding the training input.
6. The method according to any one of the preceding claims, wherein said training input is randomly or pseudo-randomly selected.
7. The method according to claim 6, wherein said training input is a white noise signal.
8. The method according to any one of the preceding claims, including deleting the shadow networks after said main network has been trained.
9. A recurrent neural network constructed and trained in accordance with the method of any one of claims 1 to 8.
10. The recurrent neural network according to claim 9, wherein each node is a device which provides its own local input.
11. The recurrent neural network according to claim 10, wherein each device is a sensor device, the sensor reading providing the local input for the sensor's local recurrent neural network.
12. A method for determining a device failure in a recurrent neural network according to claim 10 or 11, said method including: for each device, comparing the local input with the local output of the local recurrent neural network to ascertain any difference; wherein a failure is determined if the ascertained difference exceeds a predetermined threshold.
13. The method according to claim 12, wherein if a failure is determined for a device, replacing the local input for the local recurrent neural network of said device with the predicted local input.
PCT/AU2010/000720 2009-06-15 2010-06-11 Construction and training of a recurrent neural network WO2010144947A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
AU2009902733A AU2009902733A0 (en) 2009-06-15 Construction and training of a recurrent neural network
AU2009902733 2009-06-15

Publications (1)

Publication Number Publication Date
WO2010144947A1 true WO2010144947A1 (en) 2010-12-23

Family

ID=43355591

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/AU2010/000720 WO2010144947A1 (en) 2009-06-15 2010-06-11 Construction and training of a recurrent neural network

Country Status (1)

Country Link
WO (1) WO2010144947A1 (en)



Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5086479A (en) * 1989-06-30 1992-02-04 Hitachi, Ltd. Information processing system using neural network learning function
US20050192915A1 (en) * 2004-02-27 2005-09-01 Osman Ahmed System and method for predicting building thermal loads

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
KULAKOV, A. ET AL.: "Implementing artificial neural-networks in wireless sensor networks", IEEE/SARNOFF SYMPOSIUM ON ADVANCES IN WIRED AND WIRELESS COMMUNICATION, 18 April 2005 (2005-04-18), PRINCETON NJ, pages 94 - 97, XP010793755 *
OBST, O.: "Poster Abstract: Distributed Fault Detection using a Recurrent Neural Network", IEEE INTERNATIONAL CONFERENCE ON INFORMATION PROCESSING IN SENSOR NETWORKS, 13 April 2009 (2009-04-13), SAN FRANCISCO, pages 373 - 374, XP031517511 *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9400954B2 (en) 2012-07-30 2016-07-26 International Business Machines Corporation Multi-scale spatio-temporal neural network system
US9715653B2 (en) 2012-07-30 2017-07-25 International Business Machines Corporation Multi-scale spatio-temporal neural network system
US9715654B2 (en) 2012-07-30 2017-07-25 International Business Machines Corporation Multi-scale spatio-temporal neural network system
US9558442B2 (en) 2014-01-23 2017-01-31 Qualcomm Incorporated Monitoring neural networks with shadow networks
WO2018005210A1 (en) * 2016-06-29 2018-01-04 Microsoft Technology Licensing, Llc Predictive anomaly detection in communication systems
CN106656637A (en) * 2017-02-24 2017-05-10 国网河南省电力公司电力科学研究院 Anomaly detection method and device
CN106656637B (en) * 2017-02-24 2019-11-26 国网河南省电力公司电力科学研究院 A kind of power grid method for detecting abnormality and device


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 10788492

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase in:

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 10788492

Country of ref document: EP

Kind code of ref document: A1