WO2000008599A2 - Assembly of interconnected computing elements, method for the computer-aided determination of a dynamic underlying a dynamic process, and method for the computer-aided training of an assembly of interconnected computing elements - Google Patents


Info

Publication number
WO2000008599A2
WO2000008599A2 (PCT/DE1999/002014)
Authority
WO
WIPO (PCT)
Prior art date
Application number
PCT/DE1999/002014
Other languages
German (de)
English (en)
Other versions
WO2000008599A3 (fr)
Inventor
Ralf Neuneier
Hans-Georg Zimmermann
Original Assignee
Siemens Aktiengesellschaft
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Siemens Aktiengesellschaft filed Critical Siemens Aktiengesellschaft
Priority to EP99945870A (EP1021793A2)
Priority to US09/529,195 (US6493691B1)
Priority to JP2000564162A (JP2002522832A)
Publication of WO2000008599A2
Publication of WO2000008599A3


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/049Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs

Definitions

  • the invention relates to an arrangement of interconnected computing elements, a method for the computer-aided determination of a dynamic underlying a dynamic process, and a method for the computer-aided training of an arrangement of interconnected computing elements.
  • a dynamic process is usually described by a state transition description, which is not visible to an observer of the dynamic process, and an output equation, which describes observable quantities of the technical dynamic process.
  • a dynamic system 200 is subject to the influence of an external input variable u of predeterminable dimension; the input variable u at a time t is designated u_t.
  • the input variable u_t at a time t causes a state transition of the inner state s_t of the dynamic process, and the state of the dynamic process changes to a subsequent state s_t+1 at a subsequent time t+1.
  • the state transition can be written as s_t+1 = f(s_t, u_t), where f(.) denotes a general mapping rule.
  • An output variable y_t observable by an observer of the dynamic system 200 at a time t depends on the input variable u_t and the internal state s_t.
  • the output variable y_t (y_t ∈ ℝ^n) is of predeterminable dimension n.
  • the output equation can be written as y_t = g(s_t, u_t), where g(.) denotes a general mapping rule.
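As a sketch, the two equations above can be instantiated with hypothetical linear mapping rules; the matrices A, B, C, D below are illustrative assumptions, not values from the patent:

```python
import numpy as np

# hypothetical linear instances of the mapping rules f(.) and g(.)
A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([[1.0], [0.5]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.2]])

def f(s, u):
    # state transition rule: s_t+1 = f(s_t, u_t)
    return A @ s + B @ u

def g(s, u):
    # output rule: y_t = g(s_t, u_t)
    return C @ s + D @ u

s = np.zeros((2, 1))       # inner state s_t
u = np.array([[1.0]])      # external input variable u_t
for t in range(3):
    y = g(s, u)            # observable output variable y_t
    s = f(s, u)            # transition to the subsequent state s_t+1
```

With nonlinear f(.) and g(.), the same loop describes any dynamic process of the kind discussed here.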
  • an arrangement of interconnected computing elements in the form of a neural network of interconnected neurons is employed in [1].
  • the connections between the neurons of the neural network are weighted.
  • the weights of the neural network are summarized in a parameter vector v.
  • the inner state s_t+1 of a dynamic system which is subject to a dynamic process therefore depends on the input variable u_t, the inner state at the preceding time s_t, and the parameter vector v, in accordance with the following rule: s_t+1 = NN(v, s_t, u_t).
  • NN denotes a mapping rule specified by the neural network.
  • an arrangement of this kind is known as a TDRNN (Time Delay Recurrent Neural Network).
  • the TDRNN is trained with the training data record. An overview of various training methods can also be found in [1].
  • T is a number of times taken into account.
  • a so-called neural autoassociator is known from [2] (cf. FIG. 3).
  • the autoassociator 300 consists of an input layer 301, three hidden layers 302, 303, 304 and an output layer 305.
  • the input layer 301 as well as a first hidden layer 302 and a second hidden layer 303 form a unit with which a first non-linear coordinate transformation g can be carried out.
  • the second hidden layer 303 forms, together with a third hidden layer 304 and the output layer 305, a second unit with which a second nonlinear coordinate transformation h can be carried out.
  • This five-layer neural network 300 known from [2] has the property that an input variable x_t is transformed to an internal system state in accordance with the first nonlinear coordinate transformation g.
  • the internal system state is essentially transformed back to the input variable x_t.
  • the aim of this known structure is to map the input variable x_t in a first state space X to an inner state s_t in a second state space S, where the dimension Dim(S) of the second state space should be smaller than the dimension Dim(X) of the first state space, in order to achieve data compression in the hidden layer of the neural network.
  • the back transformation into the first state space X corresponds to a decompression.
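The compression and decompression can be sketched as follows; the weight matrices below are untrained random placeholders and the dimensions are assumptions chosen only for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
dim_X, dim_S = 5, 2            # Dim(S) < Dim(X): compression in the hidden layer

W_g = rng.normal(size=(dim_S, dim_X))  # first nonlinear coordinate transformation g: X -> S
W_h = rng.normal(size=(dim_X, dim_S))  # second coordinate transformation h: S -> X

def g(x):
    # compress the input variable x_t to an inner state s_t
    return np.tanh(W_g @ x)

def h(s):
    # decompress: transform the inner state back towards x_t
    return W_h @ s

x_t = rng.normal(size=dim_X)
s_t = g(x_t)        # inner state in the lower-dimensional space S
x_rec = h(s_t)      # after training, x_rec would approximate x_t
```

In the autoassociator of [2], each of the two transformations is spread over two layers and trained so that x_rec ≈ x_t.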
  • [3] also provides an overview of the basics of neural networks and the possible uses of neural networks in the area of economics.
  • the invention is therefore based on the problem of specifying an arrangement of interconnected computing elements with which modeling of a dynamic system which is subject to noise is possible and which arrangement is not subject to the disadvantages of the known arrangements.
  • the invention is based on the problem of specifying a method for computer-aided determination of a dynamic, which is the basis of a dynamic process, for dynamic processes which can only be determined with insufficient accuracy using known methods.
  • the arrangement of interconnected computing elements has the following features: a) input computing elements to which time series values, each describing a state of a system at a time, can be supplied, b) transformation computing elements for transforming the time series values into a predetermined space, which transformation computing elements are connected to the input computing elements, c) in which the transformation computing elements are connected to one another in such a way that transformed signals can be tapped at the transformation computing elements, at least three transformed signals each relating to successive times, d) composite computing elements, each of which is connected to two transformation computing elements, e) a first output computing element which is connected to the transformation computing elements, an output signal being tappable at the first output computing element, and f) a second output computing element, which is connected to the composite computing elements and by using which a predetermined condition can be taken into account when training the arrangement.
  • a method for the computer-aided determination of a dynamic underlying a dynamic process comprises the following steps: a) the dynamic process is described by a time series with time series values in a first state space, at least one first time series value describing a state of the dynamic process at a first point in time and a second time series value describing a state of the dynamic process at a second point in time, b) the first time series value is transformed into a second state space, c) the first time series value in the second state space is subjected to a mapping onto a second time series value in the second state space, d) the second time series value in the second state space is transformed back into the first state space, e) the transformations and the mapping are carried out in such a way that the dynamic process described by the time series values in the second state space satisfies a predetermined condition, f) the dynamics of the dynamic process are determined from the time series values in the second state space.
  • the arrangement receives the input signal and the arrangement determines a first output signal from which the dynamic is determined.
  • the arrangement has the following structure: a) the dynamic process is described by a time series with time series values in a first state space, at least one first time series value describing a state of the dynamic process at a first point in time and a second time series value describing a state of the dynamic process at a second point in time, b) the first time series value is transformed into a second state space, c) the first time series value in the second state space is subjected to a mapping onto a second time series value in the second state space, d) the second time series value in the second state space is transformed back into the first state space, e) the transformations and the mapping are carried out in such a way that the dynamic process described by the time series values in the second state space satisfies a given condition, f) the dynamics of the dynamic process are determined from the time series values in the second state space.
  • the training of the arrangement is carried out using predetermined training data.
  • the arrangement has the following components: a) the dynamic process is described by a time series with time series values in a first state space, at least one first time series value describing a state of the dynamic process at a first point in time and a second time series value describing a state of the dynamic process at a second point in time, b) the first time series value is transformed into a second state space, c) the first time series value in the second state space is subjected to a mapping onto a second time series value in the second state space, d) the second time series value in the second state space is transformed back into the first state space, e) the transformations and the mapping take place in such a way that the dynamic process described by the time series values in the second state space satisfies a predetermined condition, f) the dynamics of the dynamic process are determined from the time series values in the second state space.
  • the invention makes it possible to model a dynamic system, which comprises a dynamic process, by means of a smoother trajectory in a new state space, so that its further development becomes easier to predict at a subsequent point in time.
  • the invention achieves a better distinction between noise and the actual dynamics of the dynamic process.
  • the transformation computing elements are grouped in a first hidden layer and a second hidden layer, at least some of the transformation computing elements of the first hidden layer and the transformation computing elements of the second hidden layer being connected to one another.
  • At least some of the computing elements are preferably artificial neurons.
  • the transformation computing elements can be connected to one another in such a way that transformed signals can be tapped from the transformation computing elements, at least four transformed signals each relating to successive points in time.
  • the transformed signals can relate to any number of successive times.
  • the input computing elements are assigned to an input layer in such a way that the input layer has a plurality of partial input layers, with each partial input layer being able to be supplied with at least one input signal which describes a system state at a time.
  • the transformation computing elements of the first hidden layer can be grouped into hidden sub-layers in such a way that the transformation computing elements of each hidden sub-layer are each connected to input computing elements of one partial input layer.
  • the connections between the computing elements can be designed and weighted variably.
  • the connections between the composite computing elements and the second output computing element are invariant.
  • in particular, these connections have the same weight values.
  • the condition can be a predetermined smoothness of the dynamic process in the space, or a Lyapunov zero condition.
  • the dynamic process can be a dynamic process in a reactor, in particular in a chemical reactor, or it can also be a dynamic process for modeling a traffic system, in general any dynamic process that takes place in a technical dynamic system.
  • the arrangement or the methods can be used in the context of modeling a financial market. In general, the methods and arrangements are well suited to forecasting macroeconomic dynamics.
  • the time series values can be determined from physical signals.
  • Figure 2 is a sketch of a general description of a dynamic system
  • FIG. 3 shows a sketch of an autoassociator according to the prior art
  • FIG. 4 shows a sketch of a chemical reactor, from which quantities are measured, which are processed further with the arrangement;
  • FIG. 5 shows a sketch of an arrangement of a TDRNN which is unfolded over time with a finite number of states
  • FIG. 6 shows a sketch on the basis of which a spatial transformation is explained
  • FIGS. 7a and b show a sketch of a course of a dynamic process in a state space with noise (FIG. 7a) and without noise (FIG. 7b);
  • FIG. 8 shows an arrangement according to an exemplary embodiment, to which further external input variables can be fed
  • FIG. 9 shows a sketch of a spatial transformation on the basis of which an exemplary embodiment is explained in more detail
  • Figure 10 is a sketch of a traffic control system, which is modeled with the arrangement in the context of a second embodiment.
  • FIG. 4 shows a chemical reactor 400 which is filled with a chemical substance 401.
  • the chemical reactor 400 comprises a stirrer 402 with which the chemical substance 401 is stirred. Further chemical substances 403 flowing into the chemical reactor 400 react for a predeterminable period in the chemical reactor 400 with the chemical substance 401 already contained in it. A substance 404 flowing out of the reactor 400 is discharged from the chemical reactor 400 via an outlet.
  • the stirrer 402 is connected via a line to a control unit 405, with which a stirring frequency of the stirrer 402 can be set via a control signal 406.
  • a measuring device 407 is also provided, with which concentrations of chemical substances contained in chemical substance 401 are measured.
  • Measurement signals 408 are fed to a computer 409, in which they are digitized via an input/output interface 410 and an analog/digital converter 411 and stored in a memory 412.
  • a processor 413, like the memory 412, is connected to the analog/digital converter 411 via a bus 414.
  • the computer 409 is also connected via the input/output interface 410 to the control unit 405 of the stirrer 402; the computer 409 thus controls the stirring frequency of the stirrer 402.
  • the computer 409 is also connected via the input / output interface 410 to a keyboard 415, a computer mouse 416 and a screen 417.
  • the chemical reactor 400 as a dynamic technical system 200 is therefore subject to a dynamic process.
  • the chemical reactor 400 is described by means of a status description.
  • the input variable u_t consists of an indication of the temperature prevailing in the chemical reactor 400, the pressure prevailing in the chemical reactor 400, and the stirring frequency set at the time t.
  • the input variable is thus a three-dimensional vector.
  • the aim of the modeling of the chemical reactor 400 described below is to determine the dynamic development of the substance concentrations, in order to enable the efficient generation of a predefinable target substance to be produced as outflowing substance 404.
  • FIG. 5 shows the known TDRNN as a neural network 500 that is unfolded over a finite number of times.
  • the neural network 500 shown in FIG. 5 has an input layer 501 with three partial input layers 502, 503 and 504, each of which contains a predeterminable number of input computing elements, to which input variables u_t at a predeterminable time t, i.e. the time series values described below, can be applied.
  • the input computing elements, i.e. input neurons, are connected via variable connections to neurons of a predeterminable number of hidden layers 505.
  • Neurons of a first hidden layer 506 are connected to neurons of the first partial input layer 502. Furthermore, neurons of a second hidden layer 507 are connected to neurons of the second partial input layer 503. Neurons of a third hidden layer 508 are connected to neurons of the third partial input layer 504.
  • the connections between the first partial input layer 502 and the first hidden layer 506, the second partial input layer 503 and the second hidden layer 507 and the third partial input layer 504 and the third hidden layer 508 are in each case the same.
  • the weights of all these connections are each contained in a first connection matrix B'.
  • Neurons of a fourth hidden layer 509 are connected with their inputs to outputs of neurons of the first hidden layer 506 according to a structure given by a second connection matrix A_2. Furthermore, outputs of the neurons of the fourth hidden layer 509 are connected to inputs of neurons of the second hidden layer 507 according to a structure given by a third connection matrix A_1.
  • neurons of a fifth hidden layer 510 are connected with their inputs to outputs of neurons of the second hidden layer 507 according to the structure given by the second connection matrix A_2. Outputs of the neurons of the fifth hidden layer 510 are connected to inputs of neurons of the third hidden layer 508 according to the structure given by the third connection matrix A_1.
  • the connection structure is equivalent for a sixth hidden layer 511, whose neurons are connected to outputs of the neurons of the third hidden layer 508 according to the structure given by the second connection matrix A_2, and to neurons of a seventh hidden layer 512 according to the structure given by the third connection matrix A_1.
  • Neurons of an eighth hidden layer 513 are in turn connected to neurons of the seventh hidden layer 512 according to the structure given by the second connection matrix A_2, and via connections according to the third connection matrix A_1 to neurons of a ninth hidden layer 514.
  • the indices in the respective layers indicate the time t, t-1, t-2, t+1, t+2 to which the signals (u_t, u_t+1, u_t+2, ...) that can be tapped at or fed to the outputs of the respective layers relate.
  • An output layer 520 has three partial output layers, a first partial output layer 521, a second partial output layer 522 and a third partial output layer 523. Neurons of the first partial output layer 521 are connected to neurons of the third hidden layer 508 according to a structure given by an output connection matrix C. Neurons of the second partial output layer 522 are likewise connected to neurons of the seventh hidden layer 512 in accordance with the structure given by the output connection matrix C. Neurons of the third partial output layer 523 are connected to neurons of the ninth hidden layer 514 according to the output connection matrix C.
  • the output variables for a time t, t+1, t+2 can be tapped at the neurons of the partial output layers 521, 522 and 523 (y_t, y_t+1, y_t+2).
  • each layer or each sub-layer has a predeterminable number of neurons, i.e. computing elements.
  • the connection matrices can be of any dimension and each contain the weight values for the corresponding connections between the neurons of the respective layers.
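The role of the shared connection matrices in the unfolded network can be sketched as follows; for brevity the pair A_1, A_2 is collapsed into a single recurrent matrix A, and all dimensions and values are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
dim_u, dim_s = 3, 4
B = rng.normal(size=(dim_s, dim_u))  # input connection matrix, shared over all time steps
A = rng.normal(size=(dim_s, dim_s))  # recurrent connection matrix, shared over all time steps
C = rng.normal(size=(1, dim_s))      # output connection matrix, shared over all time steps

def unfolded_forward(us):
    # forward pass of the network unfolded over len(us) times:
    # the same matrices A, B, C are applied at every step (shared weights)
    s = np.zeros(dim_s)
    ys = []
    for u in us:
        s = np.tanh(A @ s + B @ u)
        ys.append(C @ s)
    return ys

ys = unfolded_forward(rng.normal(size=(5, dim_u)))
```

Sharing the matrices over the unfolded time steps is what makes the unfolded feed-forward network equivalent to a recurrent network.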
  • the arrangement shown in FIG. 1a has an input layer 100 with three partial input layers 101, 102 and 103, each partial input layer 101, 102, 103 being suppliable with time series values x_t-1, x_t, x_t+1 at a time t-1, t or t+1 respectively.
  • the partial input layers 101, 102, 103 of the input layer 100 are each connected, via connections according to a fourth connection matrix A, to neurons of a first hidden layer 110, which has three sub-layers 111, 112 and 113.
  • the neurons of the sub-layers 111, 112 and 113 of the first hidden layer 110 are connected, according to a structure given by a fifth connection matrix B, to neurons of a second hidden layer 120, which in turn has three sub-layers, a first sub-layer 121, a second sub-layer 122 and a third sub-layer 123.
  • neurons of a third hidden layer 130 are connected to the second hidden layer 120, which in turn has three sub-layers 131, 132 and 133.
  • the neurons of the third hidden layer 130 are connected to neurons of a first output layer 140, which in turn has three sub-layers 141, 142 and 143.
  • furthermore, a second output layer 150 with a first partial output layer 151 and a second partial output layer 152 is provided.
  • a first partial output layer 151 of the second output layer 150 is connected to the neurons of the first partial layer 121 of the second hidden layer 120 and of the second partial layer 122 of the second hidden layer 120.
  • the second sub-layer 152 of the second output layer 150 is on the one hand with the second sub-layer 122 and on the other connected to the third sub-layer 123 of the second hidden layer 120.
  • in each of the partial output layers of the second output layer 150, the difference between the "inner" system states of two successive times t-1, t and t+1, which are represented in the sub-layers 121, 122 and 123 of the second hidden layer 120 of the neural network, is formed.
  • a first difference (s_t - s_t-1) can thus be tapped at an output 155 of the first partial output layer 151 of the second output layer 150, and a second difference (s_t+1 - s_t) at an output of the second partial output layer 152.
  • a third output layer 160 is connected to the two partial output layers 151, 152 of the second output layer 150.
  • the third output layer 160 uses a cost function F to measure the curvature of the trajectory described in the second state space S, the cost function F being formed in accordance with the following rule:
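The patent's exact formula for F is not reproduced in this extraction. A plausible curvature measure in the spirit of the difference outputs described above, used here purely as a sketch, is the sum of squared second differences of the inner states:

```python
import numpy as np

def curvature_cost(S):
    # S: array of inner states s_t, shape (T, Dim(S)).
    # The curvature is measured here via the second differences
    # (s_t+1 - s_t) - (s_t - s_t-1); a smooth trajectory makes them small.
    d2 = S[2:] - 2.0 * S[1:-1] + S[:-2]
    return float(np.sum(d2 ** 2))

straight = np.linspace(0.0, 1.0, 10).reshape(-1, 1)  # maximally smooth trajectory
zigzag = np.array([[0.0], [1.0], [0.0], [1.0]])      # strongly curved trajectory
```

Minimizing such a term alongside the forecast error pushes the trajectory in S towards the desired smoothness.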
  • a total cost function E' is formed, using which the neural network is trained with a training data set obtained by measuring variables in the chemical reactor 400, the total cost function E' being formed in accordance with the following rule:
  • the back propagation method is used as the training method.
  • the training data set is obtained from the chemical reactor 400 in the following manner.
  • Concentrations are measured using the measuring device 407 at predetermined input variables, fed to the computer 409, digitized there and grouped as time series values x_t in a memory together with the corresponding input variables, which correspond to the measured variables.
  • two further hidden layers 170, 171 are provided, a first further layer 170 being connected to neurons of the first sub-layer 121 of the second hidden layer 120 according to a structure given by an eighth connection matrix E.
  • Outputs of the neurons of the first further layer 170 are connected to inputs of neurons of the second sub-layer 122 of the second hidden layer 120 in accordance with a structure which is predetermined by a ninth connection matrix F.
  • the inputs of the second further layer 171 are connected to outputs of the neurons of the second sub-layer 122 of the second hidden layer 120 according to the structure given by the eighth connection matrix E.
  • according to a structure given by the ninth connection matrix F, outputs of neurons of the second further layer 171 are connected to inputs of neurons of the third sub-layer 123 of the second hidden layer 120.
  • the other elements in FIG. 1b are connected to one another in the same way as the components with the same designation in FIG. 1a. Elements drawn with dashed lines in FIG. 1b are not required in the arrangement of FIG. 1b for training using the training data set and the total cost function E'.
  • the arrangement from FIG. 1b is trained using the training data set and the total cost function.
  • a time profile 601 of the dynamic process in a first space X is shown in FIG. 6.
  • Time series values x_t in the first space X are transformed into a second state space S using the first mapping function g(.) such that the course of the dynamic process in the second space S satisfies a given condition better than the course of the dynamic process in the first space X.
  • the condition specified in the exemplary embodiment is that the smoothness of a trajectory is optimized as far as possible. This means that a mapping in the first space X from a time series value x_t at a first time t to a time series value x_t+1 at a subsequent time t+1 is subject to a large amount of noise and exhibits only little structure.
  • a mapping f of a value s_t in the second space S at a time t to a value s_t+1 at a subsequent time t+1 should, by contrast, contain less noise and as much structure as possible.
  • a mapping rule g: X → S is therefore sought, with which the first space X is mapped into a second space S such that the dynamics of the process in the second space S, described by the trajectory 602, has a smoother course than the trajectory in the first space X.
  • in addition, a back-transformation rule h: S → X from the second space S into the first space X is sought, for which the following applies:
  • the dynamic mapping f is determined in the second space S such that:
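Taken together, the three mapping rules form the prediction path x_t → g → s_t → f → s_t+1 → h → x_t+1. A sketch with deliberately trivial placeholder functions (not the trained rules of the patent):

```python
import numpy as np

# placeholder mapping rules, only to show the composition;
# in the patent g, f and h are realized by the trained neural network
def g(x):
    # g: X -> S, transformation into the second state space
    return x[:2]

def f(s):
    # f: S -> S, dynamics in the second state space
    return s + 0.1

def h(s):
    # h: S -> X, back-transformation into the first state space
    return np.concatenate([s, np.zeros(1)])

x_t = np.array([0.5, -0.2, 0.0])
x_pred = h(f(g(x_t)))   # forecast of x_t+1 via the path g -> f -> h
```

The training described above adjusts g, f and h jointly so that this composition reproduces the measured time series while the trajectory in S stays smooth.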
  • the arrangement from FIG. 1b, trained according to the training method described above, is used to determine chemical quantities in the chemical reactor 400 in such a way that, for an input variable at a time t-1, forecast variables x_t and x_t+1 are determined by the arrangement in an application phase; after possible preparation, these are then used as control variables 420, 421 for the control unit 405 for controlling the stirrer 402, or for an inflow control device 430 for controlling the inflow of further chemical substances 403 into the chemical reactor 400 (cf. FIG. 4).
  • FIG. 7a illustrates that a trajectory 702 which is determined in a state space 701 and is subject to noise is, with a customary method and a resulting scaling 703, not suitable for a useful determination of a dynamic underlying the process.
  • a dynamic along a changed scaling 710 is now achieved in the state space 700 such that the course of the trajectory 702 has a greater smoothness, so that noise no longer hampers the determination of the dynamics to the same extent (see FIG. 7b).
  • components from FIGS. 1a and 1b with the same configuration are provided with the same reference symbols in FIG. 8.
  • the arrangement has a first external input layer 801 and a second external input layer 802, to the neurons of which additional external variables u_t-1 at a time t-1 and external variables u_t at a second time t, respectively, can be supplied.
  • according to the principle of shared ("split") weights described above, neurons of the first external input layer 801 and of the second external input layer 802 are connected, via a structure according to a tenth connection matrix G, to a first preprocessing layer 810 and a second preprocessing layer 811, respectively.
  • the neurons of the first preprocessing layer 810 and the second preprocessing layer 811 are each connected to inputs of the first further layer 170 and the second further layer 171 according to a structure given by an eleventh connection matrix H.
  • the eleventh connection matrix H represents a diagonal matrix.
  • FIG. 10 shows a road 900 which is used by cars 901, 902, 903, 904, 905 and 906.
  • Conductor loops 910, 911 integrated into the road 900 receive electrical signals in a known manner and feed the electrical signals 915, 916 to a computer 920 via an input / output interface 921.
  • the electrical signals are digitized into a time series and stored in a memory 923, which is connected via a bus
  • control signals 951 are fed to a traffic control system 950, by means of which a predetermined speed specification 952 can be set, or further details of traffic regulations can be displayed to the drivers of the vehicles 901, 902, 903, 904, 905 and 906 via the traffic control system 950.
  • the following local state variables are used for traffic modeling:
  • the local state variables are measured as described above using the conductor loops 910, 911.
  • the variables (v(t), p(t), q(t)) thus represent a state of the technical system "traffic" at a specific point in time t.
  • these variables are used to form an evaluation r(t) of a current state, for example with regard to traffic flow and homogeneity. This evaluation can be quantitative or qualitative.
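A minimal sketch of the state variables and a hypothetical qualitative evaluation r(t); the density threshold and the units (speed in km/h, density in vehicles/km, flow in vehicles/h) are illustrative assumptions, not values from the patent:

```python
def traffic_state(v, p, q):
    # local state variables at a time t: average speed v(t),
    # vehicle density p(t) and traffic flow q(t)
    return (v, p, q)

def evaluate(state):
    # hypothetical qualitative evaluation r(t) of the current state,
    # e.g. with regard to traffic flow and homogeneity;
    # the threshold of 30 vehicles/km is an illustrative assumption
    v, p, q = state
    return "free flow" if p < 30.0 else "congested"

r_t = evaluate(traffic_state(v=100.0, p=12.0, q=1200.0))
```

A quantitative evaluation would instead return a numeric score that can enter the training of the arrangement as a further target.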
  • the traffic dynamics are modeled in two phases:
  • Control signals 951 are formed from forecast variables ascertained in the application phase and are used to indicate which speed limitation is to be selected for a future period (t + 1). Some alternatives to the exemplary embodiment described above are shown below.
  • condition is not limited to a smoothness condition.
  • a Lyapunov zero condition can also be used.
  • the total cost function E' and the cost function F are changed accordingly, as explained below.
  • Spatial transformations g, h are created in such a way that the time is “stretched” for the sub-processes that run faster and “compressed” for the sub-processes that run slower.
  • the arrangement can be trained with a smaller number of training data, since the arrangement is already reduced to a specific subspace, the subspace of an emergent system. This speeds up the training phase or considerably reduces the computing effort for training the arrangement.
  • [The remainder of this passage is a machine-garbled reproduction of the specification's appendix, a parameter file for the training program used in the embodiment. The settings that remain recoverable are: training control (LearnCtrl) with a stochastic epoch limit and an eta schedule (file f.TrueBatch.dat, SwitchTime 10), sequential pattern selection, and Quickprop with decay 0.05; per-cluster activation functions id, ptanh and plogistic, each with parameter 0.5; error functions LnCosh, square and |x|; noise modes NoNoise and AdaptiveUniform; and, for the named connectors (ust.*), a weight-decay penalty (Penalty WtDecay), pruning and genetic optimization disabled (F), EtaModifier 1.0, and local weight loading/saving under Filename std.]
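The listing above repeatedly names "LnCosh" as an error function. As an illustrative sketch (not code from the patent), the ln-cosh loss and its gradient can be written as follows; it behaves quadratically for small errors and linearly for large ones, which makes it robust against outliers:

```python
import numpy as np

def lncosh_loss(pred, target):
    """Sum of ln(cosh(e)) over the error vector e = pred - target.
    Computed stably as |e| + log1p(exp(-2|e|)) - log(2) to avoid
    overflow of cosh for large errors."""
    e = np.abs(pred - target)
    return float(np.sum(e + np.log1p(np.exp(-2.0 * e)) - np.log(2.0)))

def lncosh_grad(pred, target):
    # d/de ln(cosh(e)) = tanh(e): bounded, unlike the gradient of e**2 / 2
    return np.tanh(pred - target)

pred = np.array([0.0, 0.5, 3.0])
target = np.zeros(3)
print(lncosh_loss(pred, target))  # small contributions for small errors, ~|e| for large ones
```

The stable form matters in practice: `np.log(np.cosh(e))` overflows already for |e| around 700, while the rewritten expression does not.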

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Molecular Biology (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Complex Calculations (AREA)

Abstract

According to the invention, an input signal is transformed into a predetermined space. Transformation computing elements are connected to one another in such a way that transformed signals can be tapped at these transformation computing elements, with at least three transformed signals each relating to successive instants. Composed computing elements are each connected to two transformation computing elements. The arrangement further comprises a first output computing element at which an output signal describing a system state at a given instant can be tapped. The first output computing element is connected to the transformation computing elements. The arrangement also comprises a second output computing element, which is connected to the composed computing elements and whose use allows a predetermined condition to be taken into account during training of the arrangement.
PCT/DE1999/002014 1998-08-07 1999-07-01 Ensemble d'elements de calcul relies les uns aux autres, procede de determination assistee par ordinateur d'une dynamique a la base d'un processus dynamique, et procede d'entrainement assiste par ordinateur pour un ensemble d'elements de calcul relies les uns aux autres WO2000008599A2 (fr)

Priority Applications (3)

Application Number Priority Date Filing Date Title
EP99945870A EP1021793A2 (fr) 1998-08-07 1999-07-01 Ensemble d'elements de calcul relies les uns aux autres, procede de determination assistee par ordinateur d'une dynamique a la base d'un processus dynamique, et procede d'entrainement assiste par ordinateur pour un ensemble d'elements de calcul relies les uns aux autres
US09/529,195 US6493691B1 (en) 1998-08-07 1999-07-01 Assembly of interconnected computing elements, method for computer-assisted determination of a dynamics which is the base of a dynamic process, and method for computer-assisted training of an assembly of interconnected elements
JP2000564162A JP2002522832A (ja) 1998-08-07 1999-07-01 相互に結合された演算子の装置、ダイナミックプロセスに基づくダイナミクスをコンピュータのサポートにより検出するための方法、並びに、相互に結合された演算子の装置をコンピュータのサポートにより検出するための方法

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
DE19835923 1998-08-07
DE19835923.3 1998-08-07

Publications (2)

Publication Number Publication Date
WO2000008599A2 true WO2000008599A2 (fr) 2000-02-17
WO2000008599A3 WO2000008599A3 (fr) 2000-05-18

Family

ID=7876893

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/DE1999/002014 WO2000008599A2 (fr) 1998-08-07 1999-07-01 Ensemble d'elements de calcul relies les uns aux autres, procede de determination assistee par ordinateur d'une dynamique a la base d'un processus dynamique, et procede d'entrainement assiste par ordinateur pour un ensemble d'elements de calcul relies les uns aux autres

Country Status (4)

Country Link
US (1) US6493691B1 (fr)
EP (1) EP1021793A2 (fr)
JP (1) JP2002522832A (fr)
WO (1) WO2000008599A2 (fr)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2001057648A2 (fr) * 2000-01-31 2001-08-09 Siemens Aktiengesellschaft Configuration d'elements de calcul interconnectes et procede de determination assistee par ordinateur du deuxieme etat d'un systeme dans un premier espace d'etat a partir d'un premier etat du systeme dans le premier espace d'etat
WO2003025851A2 (fr) * 2001-09-19 2003-03-27 Siemens Aktiengesellschaft Procede et dispositif de determination d'un premier etat courant d'une premiere suite temporelle de premiers etats respectifs d'un systeme a variation dynamique
WO2003079285A2 (fr) * 2002-03-20 2003-09-25 Siemens Aktiengesellschaft Procede et systeme ainsi que programme informatique dote de moyens de code de programme et produit de programme informatique servant a ponderer des grandeurs d'entree pour une structure neuronale, ainsi que structure neuronale associee
DE10324045B3 (de) * 2003-05-27 2004-10-14 Siemens Ag Verfahren sowie Computerprogramm mit Programmcode-Mitteln und Computerprogramm-Produkt zur Ermittlung eines zukünftigen Systemverhaltens eines dynamischen Systems
WO2006061320A2 (fr) * 2004-12-10 2006-06-15 Siemens Aktiengesellschaft Procede, dispositif et programme informatique comportant des elements de code de programme et un produit de programme informatique pour la determination d'un etat systeme futur d'un systeme dynamique
EP2112568A3 (fr) * 2008-04-23 2011-05-11 Siemens Aktiengesellschaft Procédé de commande et/ou réglage assistées par ordinateur d'un système technique

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6735580B1 (en) * 1999-08-26 2004-05-11 Westport Financial Llc Artificial neural network based universal time series
WO2001072622A1 (fr) * 2000-03-29 2001-10-04 Mitsubishi Denki Kabushiki Kaisha Dispositif de commande de gestion d'un groupe d'ascenseurs
US7846736B2 (en) * 2001-12-17 2010-12-07 Univation Technologies, Llc Method for polymerization reaction monitoring with determination of entropy of monitored data
US8463441B2 (en) 2002-12-09 2013-06-11 Hudson Technologies, Inc. Method and apparatus for optimizing refrigeration systems
US7844942B2 (en) * 2006-06-12 2010-11-30 International Business Machines Corporation System and method for model driven transformation filtering
DE102007001026B4 (de) * 2007-01-02 2008-09-04 Siemens Ag Verfahren zur rechnergestützten Steuerung und/oder Regelung eines technischen Systems
CN101868803A (zh) * 2007-09-21 2010-10-20 科德博克斯计算机服务有限责任公司 神经元网络结构和操作神经元网络结构的方法
DE102008050207B3 (de) * 2008-10-01 2010-03-25 Technische Universität Dresden Verfahren zum Erstellen von Daten
US9015093B1 (en) 2010-10-26 2015-04-21 Michael Lamport Commons Intelligent control with hierarchical stacked neural networks
US8775341B1 (en) 2010-10-26 2014-07-08 Michael Lamport Commons Intelligent control with hierarchical stacked neural networks

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5444819A (en) * 1992-06-08 1995-08-22 Mitsubishi Denki Kabushiki Kaisha Economic phenomenon predicting and analyzing system using neural network
US5761386A (en) * 1996-04-05 1998-06-02 Nec Research Institute, Inc. Method and apparatus for foreign exchange rate time series prediction and classification

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5282261A (en) * 1990-08-03 1994-01-25 E. I. Du Pont De Nemours And Co., Inc. Neural network process measurement and control
US5142612A (en) * 1990-08-03 1992-08-25 E. I. Du Pont De Nemours & Co. (Inc.) Computer neural network supervisory process control system and method
GB2266602B (en) * 1992-04-16 1995-09-27 Inventio Ag Artificially intelligent traffic modelling and prediction system
US5668717A (en) * 1993-06-04 1997-09-16 The Johns Hopkins University Method and apparatus for model-free optimal signal timing for system-wide traffic control

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5444819A (en) * 1992-06-08 1995-08-22 Mitsubishi Denki Kabushiki Kaisha Economic phenomenon predicting and analyzing system using neural network
US5761386A (en) * 1996-04-05 1998-06-02 Nec Research Institute, Inc. Method and apparatus for foreign exchange rate time series prediction and classification

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
LUBENSKY D: "LEARNING SPECTRAL-TEMPORAL DEPENDENCIES USING CONNECTIONIST NETWORKS", International Conference on Acoustics, Speech & Signal Processing (ICASSP), US, New York, IEEE, Vol. Conf. 13, 1988, pp. 418-421, XP000129262 *
TAKASE ET AL: "TIME SEQUENTIAL PATTERN TRANSFORMATION AND ATTRACTORS OF RECURRENT NEURAL NETWORKS", Proceedings of the International Joint Conference on Neural Networks (IJCNN), US, New York, IEEE, Vol. 3, 25-29 October 1993, pp. 2319-2322, XP000502253, ISBN: 0-7803-1422-0 *
TEMPLEMAN J N: "RACE NETWORKS: A THEORY OF COMPETITIVE RECOGNITION NETWORKS BASED ON THE RATE OF REACTIVATION OF NEURONS IN CORTICAL COLUMNS", Proceedings of the International Conference on Neural Networks, US, New York, IEEE, Vol. II, 1988, pp. 9-16, XP000744244 *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2001057648A2 (fr) * 2000-01-31 2001-08-09 Siemens Aktiengesellschaft Configuration d'elements de calcul interconnectes et procede de determination assistee par ordinateur du deuxieme etat d'un systeme dans un premier espace d'etat a partir d'un premier etat du systeme dans le premier espace d'etat
WO2001057648A3 (fr) * 2000-01-31 2002-02-07 Siemens Ag Configuration d'elements de calcul interconnectes et procede de determination assistee par ordinateur du deuxieme etat d'un systeme dans un premier espace d'etat a partir d'un premier etat du systeme dans le premier espace d'etat
WO2003025851A2 (fr) * 2001-09-19 2003-03-27 Siemens Aktiengesellschaft Procede et dispositif de determination d'un premier etat courant d'une premiere suite temporelle de premiers etats respectifs d'un systeme a variation dynamique
WO2003025851A3 (fr) * 2001-09-19 2004-02-19 Siemens Ag Procede et dispositif de determination d'un premier etat courant d'une premiere suite temporelle de premiers etats respectifs d'un systeme a variation dynamique
WO2003079285A2 (fr) * 2002-03-20 2003-09-25 Siemens Aktiengesellschaft Procede et systeme ainsi que programme informatique dote de moyens de code de programme et produit de programme informatique servant a ponderer des grandeurs d'entree pour une structure neuronale, ainsi que structure neuronale associee
WO2003079285A3 (fr) * 2002-03-20 2004-09-10 Siemens Ag Procede et systeme ainsi que programme informatique dote de moyens de code de programme et produit de programme informatique servant a ponderer des grandeurs d'entree pour une structure neuronale, ainsi que structure neuronale associee
DE10324045B3 (de) * 2003-05-27 2004-10-14 Siemens Ag Verfahren sowie Computerprogramm mit Programmcode-Mitteln und Computerprogramm-Produkt zur Ermittlung eines zukünftigen Systemverhaltens eines dynamischen Systems
US7464061B2 (en) 2003-05-27 2008-12-09 Siemens Aktiengesellschaft Method, computer program with program code means, and computer program product for determining a future behavior of a dynamic system
WO2006061320A2 (fr) * 2004-12-10 2006-06-15 Siemens Aktiengesellschaft Procede, dispositif et programme informatique comportant des elements de code de programme et un produit de programme informatique pour la determination d'un etat systeme futur d'un systeme dynamique
WO2006061320A3 (fr) * 2004-12-10 2007-04-19 Siemens Ag Procede, dispositif et programme informatique comportant des elements de code de programme et un produit de programme informatique pour la determination d'un etat systeme futur d'un systeme dynamique
EP2112568A3 (fr) * 2008-04-23 2011-05-11 Siemens Aktiengesellschaft Procédé de commande et/ou réglage assistées par ordinateur d'un système technique
US8160978B2 (en) 2008-04-23 2012-04-17 Siemens Aktiengesellschaft Method for computer-aided control or regulation of a technical system

Also Published As

Publication number Publication date
JP2002522832A (ja) 2002-07-23
WO2000008599A3 (fr) 2000-05-18
EP1021793A2 (fr) 2000-07-26
US6493691B1 (en) 2002-12-10

Similar Documents

Publication Publication Date Title
EP2106576B1 (fr) Procédé de commande et/ou de régulation d'un système technique assistées par ordinateur
EP1021793A2 (fr) Ensemble d'elements de calcul relies les uns aux autres, procede de determination assistee par ordinateur d'une dynamique a la base d'un processus dynamique, et procede d'entrainement assiste par ordinateur pour un ensemble d'elements de calcul relies les uns aux autres
EP2112568B1 (fr) Procédé de commande et/ou réglage assistées par ordinateur d'un système technique
AT511577B1 (de) Maschinell umgesetztes verfahren zum erhalten von daten aus einem nicht linearen dynamischen echtsystem während eines testlaufs
EP1145192B1 (fr) Configuration d'elements informatiques interconnectes, procede de determination assistee par ordinateur d'une dynamique a la base d'un processus dynamique et procede d'apprentissage assiste par ordinateur d'une configuration d'elements informatiques interconnectes
EP1052558A1 (fr) Procédé et dispositif d'estimation d' état
DE112020003050T5 (de) Fehlerkompensation in analogen neuronalen netzen
EP1252566B1 (fr) Configuration d'elements de calcul interconnectes et procede de determination assistee par ordinateur du deuxieme etat d'un systeme dans un premier espace d'etat a partir d'un premier etat du systeme dans le premier espace d'etat
EP1327959B1 (fr) Réseau neuronal pour modéliser un système physique et procédé de construction de ce réseau neuronal
WO2004107066A1 (fr) Procede et programme informatique comportant des moyens de code de programme et programme informatique pour determiner un comportement futur d'un systeme dynamique
EP1145190B1 (fr) Ensemble de plusieurs elements de calcul relies entre eux, procede de determination assistee par ordinateur d'une dynamique se trouvant a la base d'un processus dynamique et procede pour l'entrainement assiste par ordinateur d'un ensemble d'elements de calcul relies entre eux
EP1428177A2 (fr) Procede et dispositif de determination d'un premier etat courant d'une premiere suite temporelle de premiers etats respectifs d'un systeme a variation dynamique
WO2000003355A2 (fr) Reseau neuronal, et procede et dispositif pour l'entrainement d'un reseau neuronal
DE10047172C1 (de) Verfahren zur Sprachverarbeitung
EP1384198A2 (fr) Procede et ensemble pour la representation assistee par ordinateur de plusieurs descriptions d'etat changeant dans le temps, et procede d'apprentissage d'un tel ensemble
WO2006134011A1 (fr) Procede de traitement de donnees numeriques assiste par ordinateur
EP1190383B1 (fr) Procede de determination assistee par ordinateur de l'appartenance d'une grandeur d'entree donnee a un groupe
DE102016113310A1 (de) Verfahren zur Bewertung von Aussagen einer Mehrzahl von Quellen zu einer Mehrzahl von Fakten
EP1194890B1 (fr) Dispositif, procede, produit comportant un programme d'ordinateur et support de stockage lisible par ordinateur pour la compensation assistee par ordinateur d'un etat de desequilibre d'un systeme technique
WO2003079285A2 (fr) Procede et systeme ainsi que programme informatique dote de moyens de code de programme et produit de programme informatique servant a ponderer des grandeurs d'entree pour une structure neuronale, ainsi que structure neuronale associee
WO2000026786A1 (fr) Procede et systeme pour evaluer une chaine de markov modelisant un systeme technique
WO2022152683A1 (fr) Détermination d'une valeur de confiance d'un réseau de neurones artificiels
EP3710992A1 (fr) Réseau neuronal artificiel et procédé associé
DE4338141A1 (de) RIP - Entwicklungsumgebung basierend auf dem Regelbasierten - Interpolations - Verfahren
WO2005055133A2 (fr) Procede et dispositif et programme d'ordinateur comportant des moyens a code de programme, et produit programme d'ordinateur pour la determination d'un etat futur d'un systeme dynamique

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): JP US

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE

WWE Wipo information: entry into national phase

Ref document number: 1999945870

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 09529195

Country of ref document: US

121 Ep: the epo has been informed by wipo that ep was designated in this application
AK Designated states

Kind code of ref document: A3

Designated state(s): JP US

AL Designated countries for regional patents

Kind code of ref document: A3

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE

WWP Wipo information: published in national office

Ref document number: 1999945870

Country of ref document: EP

WWW Wipo information: withdrawn in national office

Ref document number: 1999945870

Country of ref document: EP