EP1384198A2 - Method and arrangement for the computer-assisted representation of several state descriptions changing over time, and method for training such an arrangement - Google Patents

Method and arrangement for the computer-assisted representation of several state descriptions changing over time, and method for training such an arrangement

Info

Publication number
EP1384198A2
Authority
EP
European Patent Office
Prior art keywords
filename
parameter
std
mlp
nopenalty
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP01978185A
Other languages
German (de)
English (en)
Inventor
Caglayan Erdem
Achim Müller
Ralf Neuneier
Hans-Georg Zimmermann
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Siemens AG
Original Assignee
Siemens AG
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Siemens AG filed Critical Siemens AG
Publication of EP1384198A2 publication Critical patent/EP1384198A2/fr
Withdrawn legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/049Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs

Definitions

  • the invention relates to a method and an arrangement for the computer-assisted mapping of several time-varying state descriptions, and to a method for training an arrangement for the computer-assisted mapping of several time-varying state descriptions.
  • a dynamic process is usually described by a state transition description, which is not visible to an observer of the dynamic process, and an output equation, which describes observable quantities of the technical dynamic process.
  • such a structure is shown in FIG. 2a.
  • a dynamic system 200 is subject to the influence of an external input variable u of predeterminable dimension, the input variable at a time t being denoted ut.
  • the input variable ut at a time t causes a state transition of the inner state st of the dynamic process, and the state of the dynamic process changes into a subsequent state st+1 at a subsequent time t+1. The following holds: st+1 = f(st, ut), where f(.) denotes a general mapping rule.
  • an output variable yt observable by an observer of the dynamic system 200 at a time t depends on the input variable ut and the internal state st.
  • the output variable yt (yt ∈ ℝ^n) is of predeterminable dimension n.
  • the dependence is given by the general rule yt = g(st, ut), where g(.) denotes a general mapping rule.
  • when the dynamic process is described by a neural network, an inner state of a dynamic system which is subject to a dynamic process depends on the input variable ut, the inner state of the previous point in time st-1 and a parameter vector v in accordance with the following rule: st = NN(v, st-1, ut), where NN(.) denotes a mapping rule specified by the neural network.
  • a Time Delay Recurrent Neural Network (TDRNN) is trained in a training phase in such a way that for each input variable ut a target variable yt is determined on a real dynamic system.
  • the tuple (input variable, determined target variable) is called a training datum.
  • a large number of such training data form a training data set.
  • the temporally successive tuples (ut-4, yt-4), (ut-3, yt-3), (ut-2, yt-2) of the points in time t-4, t-3, t-2, ... of the training data set each have a predetermined time step.
  • the TDRNN is trained with the training data record. An overview of various training methods can also be found in [1].
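As a minimal sketch (illustrative names, not the patent's own code), grouping the measured pairs into such a training data set can look as follows:

```python
# Sketch: assemble training tuples (u_k, y_k) from measured time series.
# All names (u_series, y_series, window) are illustrative, not from the patent.

def build_training_set(u_series, y_series, window=4):
    """Group temporally successive (input, target) tuples into training examples.

    Each example covers `window` consecutive points in time separated by
    the predetermined time step, e.g. (u_{t-4}, y_{t-4}) ... (u_{t-1}, y_{t-1}).
    """
    training_set = []
    for t in range(window, len(u_series)):
        example = [(u_series[k], y_series[k]) for k in range(t - window, t)]
        training_set.append(example)
    return training_set

# Example: 100 points in time, three-dimensional inputs, scalar targets.
u_series = [(0.1 * k, 0.2, 0.3) for k in range(100)]
y_series = [0.05 * k for k in range(100)]
training_data = build_training_set(u_series, y_series)
```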
  • T is the number of points in time taken into account.
  • [2] also provides an overview of the basics of neural networks and the possible uses of neural networks in the area of economics.
  • the invention is therefore based on the problem of specifying a method, an arrangement, and a method for training an arrangement for the computer-aided mapping of a plurality of time-varying state descriptions, with which a state transition description of a dynamic system can be described with improved accuracy and which are not subject to the disadvantages of the known arrangements and methods.
  • the method for the computer-aided mapping of several time-varying state descriptions, each of which describes a time-varying state of a dynamic system at an associated point in time in a state space, which dynamic system maps an input variable to an associated output variable, has the following steps:
  • a) a first state description in a first state space is mapped by a first mapping to a second state description in a second state space,
  • b) in the first mapping, the second state description of a temporally earlier state is taken into account,
  • c) the second state description is mapped by a second mapping to a third state description in the first state space, characterized in that
  • d) the first state description is mapped by a third mapping to a fourth state description in the second state space,
  • e) in the third mapping, the fourth state description of a temporally later state is taken into account, and
  • f) the fourth state description is mapped by a fourth mapping to the third state description, the mappings being adapted in such a way that the mapping of the first state description onto the third state description describes the mapping of the input variable onto the associated output variable with a specified accuracy.
  • the arrangement for the computer-aided mapping of several time-varying state descriptions, each of which describes a time-varying state of a dynamic system at an associated point in time in a state space, which dynamic system maps an input variable to an associated output variable, has the following components: a) a first mapping unit which is set up in such a way that a first state description in a first state space can be mapped by a first mapping to a second state description in a second state space, b) the first mapping unit being set up in such a way that in the first mapping the second state description of a temporally earlier state can be taken into account, c) a second mapping unit which is set up in such a way that the second state description can be mapped by a second mapping to a third state description in the first state space, characterized in that d) the arrangement has a third mapping unit which is set up in such a way that the first state description can be mapped by a third mapping to a fourth state description in the second state space, e) the third mapping unit being set up in such a way that in the third mapping the fourth state description of a temporally later state can be taken into account, and f) the arrangement has a fourth mapping unit which is set up in such a way that the fourth state description can be mapped by a fourth mapping to the third state description.
  • the method for training an arrangement for the computer-aided mapping of a plurality of time-varying state descriptions, each of which describes a time-varying state of a dynamic system at an associated point in time in a state space, which dynamic system maps an input variable to an associated output variable, uses an arrangement with the following components: a) a first mapping unit which is set up in such a way that a first state description in a first state space can be mapped by a first mapping to a second state description in a second state space, b) the first mapping unit being set up in such a way that in the first mapping the second state description of a temporally earlier state can be taken into account, c) a second mapping unit which is set up in such a way that the second state description can be mapped by a second mapping to a third state description in the first state space, d) a third mapping unit which is set up in such a way that the first state description can be mapped by a third mapping to a fourth state description in the second state space, and e) the third mapping unit being set up in such a way that in the third mapping the fourth state description of a temporally later state can be taken into account.
  • in the training, the mapping units are adapted in such a way that the mapping of the first state description to the third state description describes the mapping of the input variable to the associated output variable with a specified accuracy.
  • the arrangement is particularly suitable for carrying out the method according to the invention or one of its further developments explained below.
  • the invention or any further development described below can also be implemented by a computer program product which has a storage medium on which a computer program carrying out the invention or further development is stored.
  • an imaging unit is implemented by a neuron layer composed of at least one neuron.
  • a state description is a vector of predefinable dimension. A further development is preferably used to determine the dynamics of a dynamic process.
  • One embodiment has a measuring arrangement for detecting physical signals with which the dynamic process is described.
  • a further development is preferably used to determine the dynamics of a dynamic process which takes place in a technical system, in particular in a chemical reactor, to determine the dynamics of an electrocardiogram, or to determine economic or macroeconomic dynamics.
  • a further development can also be used to monitor or control a dynamic process, in particular a chemical process.
  • the status descriptions can be determined from physical signals.
  • a further development is used in speech processing, the input variable being first speech information of a word to be spoken and/or a syllable to be spoken, and the output variable being second speech information of the word and/or syllable to be spoken.
  • the first speech information comprises a classification of the word and/or syllable to be spoken and/or pause information of the word and/or syllable to be spoken.
  • the second speech information includes accentuation information of the word and/or syllable to be spoken.
  • the first speech information comprises phonetic and/or structural information of the word and/or syllable to be spoken, and/or the second speech information contains frequency information of the word and/or syllable to be spoken.
  • exemplary embodiments of the invention are shown in figures and are explained in more detail below.
  • FIG. 1 sketch of an arrangement according to a first embodiment (KRKNN);
  • FIGS. 2a and 2b show a first sketch of a general description of a dynamic system and a second sketch of a description of a dynamic system which is based on a “causal-retro-causal” relationship;
  • Figure 3 shows an arrangement according to a second embodiment (KRKFKNN);
  • FIG. 4 shows a sketch of a chemical reactor, from which quantities are measured, which are processed further with the arrangement according to the first exemplary embodiment
  • FIG. 5 shows a sketch of an arrangement of a TDRNN which is unfolded over time with a finite number of states
  • FIG. 6 shows a sketch of a traffic control system which is modeled with the arrangement in the context of a third exemplary embodiment;
  • FIG. 7 sketch of an alternative arrangement according to the first exemplary embodiment (KRKNN with disconnected connections);
  • FIG. 8 sketch of an alternative arrangement according to the second exemplary embodiment (KRKFKNN with disconnected connections);
  • FIG. 9 sketch of a further alternative arrangement according to the first exemplary embodiment (KRKNN with fixed-point recurrence).
  • FIG. 10 sketch of a speech processing using an arrangement according to a first exemplary embodiment (KRKNN);
  • FIG. 11 sketch of a speech processing using an arrangement according to a second exemplary embodiment (KRKFKNN).
  • FIG. 4 shows a chemical reactor 400 which is filled with a chemical substance 401.
  • the chemical reactor 400 comprises a stirrer 402 with which the chemical substance 401 is stirred. Further chemical substances 403 flowing into the chemical reactor 400 react for a predeterminable period with the chemical substance 401 already contained in it. A substance 404 flowing out of the reactor 400 is discharged from the chemical reactor 400 via an outlet.
  • the stirrer 402 is connected via a line to a control unit 405 with which a stirring frequency of the stirrer 402 can be set via a control signal 406.
  • a measuring device 407 is also provided, with which concentrations of chemical substances contained in chemical substance 401 are measured.
  • measurement signals 408 are fed to a computer 409, in which they are digitized via an input/output interface 410 and an analog/digital converter 411 and stored in a memory 412.
  • a processor 413, like the memory 412, is connected to the analog/digital converter 411 via a bus 414.
  • the computer 409 is also connected via the input/output interface 410 to the control unit 405 of the stirrer 402, and the computer 409 thus controls the stirring frequency of the stirrer 402.
  • the computer 409 is also connected via the input / output interface 410 to a keyboard 415, a computer mouse 416 and a screen 417.
  • the chemical reactor 400 as a dynamic technical system 250 is therefore subject to a dynamic process.
  • the chemical reactor 400 is described by means of a status description.
  • an input variable ut of this state description is composed of an indication of the temperature prevailing in the chemical reactor 400, the pressure prevailing in the chemical reactor 400 and the stirring frequency set at time t.
  • the input variable ut is thus a three-dimensional vector.
  • the aim of the modeling of the chemical reactor 400 described in the following is to determine the dynamic development of the substance concentrations, in order to enable efficient generation of a predefinable target substance to be produced as the outflowing substance 404.
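For illustration, the three-dimensional input vector described above can be assembled as follows (names and example values are assumptions, not taken from the patent):

```python
# Illustrative sketch of the three-dimensional input variable u_t of the
# reactor state description (temperature, pressure, stirring frequency).

def reactor_input(temperature: float, pressure: float,
                  stirring_frequency: float) -> tuple[float, float, float]:
    """Assemble u_t = (temperature, pressure, stirring frequency) at time t."""
    return (temperature, pressure, stirring_frequency)

u_t = reactor_input(temperature=353.0, pressure=1.2e5, stirring_frequency=2.5)
```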
  • such a structure of a dynamic system with a “causal-retro-causal” relationship is shown in FIG. 2b.
  • the dynamic system 250 is subject to the influence of an external input variable u of predeterminable dimension, the input variable at a time t being referred to as ut.
  • the input variable ut at a time t causes a change in the dynamic process taking place in the dynamic system 250.
  • an internal state of the system 250 at a time t, which cannot be observed by an observer of the system 250, is composed of a first inner partial state st and a second inner partial state rt.
  • the first inner partial state st is influenced by an earlier first inner partial state st-1 and the input variable ut in accordance with the rule st = f1(st-1, ut), where f1(.) denotes a general mapping rule.
  • such a relationship is usually referred to as “causality”.
  • the second inner partial state rt is influenced by a later second inner partial state rt+1, which generally represents an expectation of a later state of the dynamic system 250, and by the input variable ut, in accordance with the rule rt = f2(rt+1, ut), where f2(.) denotes a general mapping rule.
  • such a relationship is usually referred to as “retro-causality”.
  • an output variable yt observable by an observer of the dynamic system 250 at a time t thus depends on the input variable ut, the first inner partial state st and the second inner partial state rt.
  • the output variable yt (yt ∈ ℝ^n) is of predeterminable dimension n.
  • the dependence of the output variable yt on the input variable ut, the first inner partial state st and the second inner partial state rt of the dynamic process is given by the following general rule: yt = g(st, rt, ut), where g(.) denotes a general mapping rule.
  • KRKNN: causal-retro-causal neural network
  • the connections between the neurons of the neural network are weighted.
  • the weights of the neural network are summarized in a parameter vector v.
  • the first inner partial state st and the second inner partial state rt depend on the input variable ut in accordance with the following rules: st = NN(v, st-1, ut) and rt = NN(v, rt+1, ut), where NN(.) denotes a mapping rule specified by the neural network.
  • the KRKNN 100 according to FIG. 1 is a neural network unfolded over four points in time, t-1, t, t+1 and t+2. The basics of a neural network unfolded over a finite number of points in time are described in [1].
  • FIG. 5 shows the known TDRNN as a neural network 500 that is unfolded over a finite number of points in time.
  • the neural network 500 shown in FIG. 5 has an input layer 501 with three partial input layers 502, 503 and 504, each of which contains a predeterminable number of input computing elements, to which input variables ut at a predefinable time t, i.e. the time series values described below, can be applied.
  • input computing elements, i.e. input neurons, are connected via variable connections to neurons of a predefinable number of hidden layers 505.
  • neurons of a first hidden layer 506 are connected to neurons of the first partial input layer 502. Furthermore, neurons of a second hidden layer 507 are connected to neurons of the second partial input layer 503, and neurons of a third hidden layer 508 are connected to neurons of the third partial input layer 504.
  • the connections between the first partial input layer 502 and the first hidden layer 506, between the second partial input layer 503 and the second hidden layer 507, and between the third partial input layer 504 and the third hidden layer 508 are in each case the same; the weights of all these connections are contained in a first connection matrix B.
  • neurons of a fourth hidden layer 509 are connected with their inputs to outputs of neurons of the first hidden layer 506 according to a structure given by a second connection matrix A2. Furthermore, outputs of the neurons of the fourth hidden layer 509 are connected to inputs of neurons of the second hidden layer 507 according to a structure given by a third connection matrix A1.
  • neurons of a fifth hidden layer 510 are connected with their inputs to outputs of neurons of the second hidden layer 507 according to a structure given by the second connection matrix A2. Outputs of the neurons of the fifth hidden layer 510 are connected to inputs of neurons of the third hidden layer 508 according to a structure given by the third connection matrix A1.
  • this connection structure is equivalent for a sixth hidden layer 511, whose neurons are connected to outputs of the neurons of the third hidden layer 508 according to a structure given by the second connection matrix A2, and to neurons of a seventh hidden layer 512 according to a structure given by the third connection matrix A1.
  • neurons of an eighth hidden layer 513 are in turn connected to outputs of the neurons of the seventh hidden layer 512 according to a structure given by the second connection matrix A2, and to neurons of a ninth hidden layer 514 according to a structure given by the third connection matrix A1.
  • an output layer 520 has three partial output layers: a first partial output layer 521, a second partial output layer 522 and a third partial output layer 523. Neurons of the first partial output layer 521 are connected to neurons of the third hidden layer 508 according to a structure given by an output connection matrix C. Neurons of the second partial output layer 522 are connected to neurons of the seventh hidden layer 512 in accordance with the structure given by the output connection matrix C. Neurons of the third partial output layer 523 are connected to neurons of the ninth hidden layer 514 according to the output connection matrix C. At the neurons of the partial output layers 521, 522 and 523, the output variables for a time t, t+1, t+2 (yt, yt+1, yt+2) can be tapped.
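Functionally, this layer structure unfolds one and the same recurrent mapping over a finite number of points in time with shared weight matrices. A simplified sketch of that unfolding (NumPy; the intermediate layers 509-511 and 513 are collapsed into a single transition matrix, and all dimensions, initializations and the tanh nonlinearity are assumptions, not taken from the patent):

```python
import numpy as np

# Simplified sketch of a TDRNN unfolded over time (cf. FIG. 5).
# The shared matrices play the roles of the connection matrices
# B (input -> hidden), A (hidden -> hidden, collapsing A1/A2)
# and C (hidden -> output); all shapes and names are illustrative.

rng = np.random.default_rng(0)
dim_u, dim_s, dim_y, T = 3, 8, 2, 5

B = rng.normal(scale=0.1, size=(dim_s, dim_u))  # shared input weights
A = rng.normal(scale=0.1, size=(dim_s, dim_s))  # shared transition weights
C = rng.normal(scale=0.1, size=(dim_y, dim_s))  # shared output weights

def tdrnn_forward(u_sequence):
    """Unfold the network over the inputs; the same weights apply at every step."""
    s = np.zeros(dim_s)
    outputs = []
    for u in u_sequence:
        s = np.tanh(A @ s + B @ u)   # inner state transition s_{t+1}
        outputs.append(C @ s)        # observable output y_t
    return outputs

ys = tdrnn_forward([rng.normal(size=dim_u) for _ in range(T)])
```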
  • each layer and each sub-layer has a predeterminable number of neurons, i.e. computing elements.
  • Sub-layers of a layer each represent a system state of the dynamic system described by the arrangement. Accordingly, sub-layers of a hidden layer each represent an “inner” system state.
  • connection matrices are of any dimension and each contain the weight values for the corresponding connections between the neurons of the respective layers.
  • the connections are directional and marked by arrows in FIG. 1.
  • an arrow direction indicates a “computing direction”, in particular a mapping direction or a transformation direction.
  • the arrangement shown in FIG. 1 has an input layer 100 with four partial input layers 101, 102, 103 and 104, to each of which time series values ut-1, ut, ut+1 and ut+2 at a time t-1, t, t+1 or t+2, respectively, can be fed.
  • the partial input layers 101, 102, 103, 104 of the input layer 100 are each connected via connections according to a first connection matrix A to neurons of a first hidden layer 110 with four partial layers 111, 112, 113 and 114.
  • the partial input layers 101, 102, 103, 104 of the input layer 100 are additionally each connected via connections according to a second connection matrix B to neurons of a second hidden layer 120 with four partial layers 121, 122, 123 and 124.
  • the neurons of the first hidden layer 110 are each connected to neurons of an output layer 140, which in turn has four partial output layers 141, 142, 143 and 144, in accordance with a structure given by a third connection matrix C.
  • the neurons of the second hidden layer 120 are also connected to the neurons of the output layer 140 in accordance with a structure given by a fourth connection matrix D.
  • the sublayer 111 of the first hidden layer 110 is connected to the neurons of the sublayer 112 of the first hidden layer 110 via a connection according to a fifth connection matrix E.
  • all other sub-layers 112, 113 and 114 of the first hidden layer 110 have corresponding connections.
  • all sub-layers 111, 112, 113 and 114 of the first hidden layer 110 are thus connected to one another in accordance with their chronological sequence t-1, t, t+1 and t+2.
  • the sub-layers 121, 122, 123 and 124 of the second hidden layer 120 are connected to one another in opposite directions.
  • the sub-layer 124 of the second hidden layer 120 is connected to the neurons of the sub-layer 123 of the second hidden layer 120 via a connection according to a sixth connection matrix F.
  • all other sub-layers 123, 122 and 121 of the second hidden layer 120 have corresponding connections.
  • an “inner” system state st, st+1 or st+2 of the sub-layer 112, 113 or 114 of the first hidden layer is formed in each case from the associated input state ut, ut+1 or ut+2 and the preceding “inner” system state st-1, st or st+1.
  • an “inner” system state rt-1, rt or rt+1 of the sub-layer 121, 122 or 123 of the second hidden layer 120 is formed, in accordance with the connections described, from the associated input state ut-1, ut or ut+1 and the temporally following “inner” system state rt, rt+1 or rt+2.
  • an output state is formed in each case from the associated “inner” system state st-1, st, st+1 or st+2 of a sub-layer 111, 112, 113 or 114 of the first hidden layer 110 and from the associated “inner” system state rt-1, rt, rt+1 or rt+2 of a sub-layer 121, 122, 123 or 124 of the second hidden layer 120.
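The forward computation described above can be sketched as follows (NumPy; the matrix roles A, B, C, D, E, F follow FIG. 1, while dimensions, initialization and the tanh nonlinearity are assumptions, not taken from the patent):

```python
import numpy as np

# Sketch of the KRKNN forward pass over times t-1 .. t+2 (cf. FIG. 1).
# Causal chain: s_t from u_t and the *preceding* s_{t-1} (matrices A, E).
# Retro-causal chain: r_t from u_t and the *following* r_{t+1} (matrices B, F).
# Output: y_t is formed from both partial states (matrices C, D).

rng = np.random.default_rng(1)
dim_u, dim_h, dim_y = 3, 8, 2

A = rng.normal(scale=0.1, size=(dim_h, dim_u))  # input -> first hidden layer
B = rng.normal(scale=0.1, size=(dim_h, dim_u))  # input -> second hidden layer
E = rng.normal(scale=0.1, size=(dim_h, dim_h))  # s_{t-1} -> s_t (causal)
F = rng.normal(scale=0.1, size=(dim_h, dim_h))  # r_{t+1} -> r_t (retro-causal)
C = rng.normal(scale=0.1, size=(dim_y, dim_h))  # s_t -> y_t
D = rng.normal(scale=0.1, size=(dim_y, dim_h))  # r_t -> y_t

def krknn_forward(u_sequence):
    T = len(u_sequence)
    s = np.zeros(dim_h)
    s_states = []
    for u in u_sequence:                      # forward in time
        s = np.tanh(A @ u + E @ s)
        s_states.append(s)
    r = np.zeros(dim_h)
    r_states = [None] * T
    for t in reversed(range(T)):              # backward in time
        r = np.tanh(B @ u_sequence[t] + F @ r)
        r_states[t] = r
    return [C @ s_states[t] + D @ r_states[t] for t in range(T)]

ys = krknn_forward([rng.normal(size=dim_u) for _ in range(4)])  # t-1 .. t+2
```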
  • T is the number of points in time taken into account.
  • the backpropagation method is used as the training method.
  • the training data set is obtained from the chemical reactor 400 in the following manner.
  • concentrations are measured at predetermined input variables with the measuring device 407, fed to the computer 409, digitized there and stored in a memory as time series values xt, grouped together with the input variables that correspond to the measured variables.
  • the weight values of the respective connection matrices are adjusted.
  • the adjustment is made in such a way that the KRKNN describes the dynamic system it simulates, in this case the chemical reactor, as precisely as possible.
  • the arrangement from FIG. 1 is trained using the training data set and the cost function E.
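The cost function E itself is not legible in the available text; a plausible form, consistent with the TDRNN training methods referenced in [1], is the averaged quadratic error over the T points in time taken into account:

```latex
E = \frac{1}{T} \sum_{t=1}^{T} \left( y_t - y_t^{d} \right)^{2} \;\rightarrow\; \min_{v}
```

Here y_t denotes the output computed by the arrangement, y_t^d the measured target value, and v the weights collected in the connection matrices.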
  • the arrangement from FIG. 1 trained according to the training method described above is used to control and monitor the chemical reactor 400.
  • from the input variables ut-1 and ut, a predicted output variable yt+1 is determined. This is then fed, possibly after preparation, as a control variable to the control means 405 for controlling the stirrer 402 and to a control device 430 for inflow control (cf. FIG. 4).
  • FIG. 3 shows a further development of the KRKNN shown in FIG. 1 and described in the context of the above statements.
  • KRKFKNN: causal-retro-causal error-correcting neural network
  • the input variable ut is made up of information about a rental price, a housing supply, inflation and an unemployment rate, this information being determined at the end of each year (December values) for a residential area to be examined.
  • the input variable is thus a four-dimensional vector.
  • a time series of the input variables, consisting of several chronologically successive vectors, has time steps of one year each.
  • the aim of the modeling of rental prices described below is to forecast a future rental price.
  • the KRKFKNN has a second input layer 150 with four partial input layers 151, 152, 153 and 154, to each of which time series values yt-1^d, yt^d, yt+1^d and yt+2^d at a time t-1, t, t+1 or t+2, respectively, can be fed.
  • the time series values yt-1^d, yt^d, yt+1^d and yt+2^d are output values measured on the dynamic system.
  • the partial input layers 151, 152, 153, 154 of the second input layer 150 are each connected to neurons of the output layer 140 via connections according to a seventh connection matrix, which is a negative identity matrix.
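The effect of the negative identity matrix is that the output layer receives the difference between the computed and the measured output, i.e. an error-correction signal. A minimal sketch (continuing the illustrative NumPy code above; names and shapes are assumptions, not from the patent):

```python
import numpy as np

# Sketch of the error-correcting extension (KRKFKNN, cf. FIG. 3):
# the measured outputs y^d enter the output layer through a negative
# identity matrix, so the network effectively processes (y_t - y_t^d).

def krkfknn_output(y_computed: np.ndarray, y_measured: np.ndarray) -> np.ndarray:
    """Output-layer value with the second input layer attached via -I."""
    minus_identity = -np.eye(len(y_measured))        # seventh connection matrix
    return y_computed + minus_identity @ y_measured  # = y_t - y_t^d

err = krkfknn_output(np.array([0.8, 0.1]), np.array([1.0, 0.0]))
```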
  • the procedure for training the arrangement described above corresponds to the procedure for training the arrangement according to the first exemplary embodiment.
  • 3rd exemplary embodiment: traffic modeling and traffic jam forecast
  • a third exemplary embodiment described below describes traffic modeling and is used for a traffic jam forecast.
  • the arrangement according to the first exemplary embodiment is used (cf. FIG. 1).
  • the third exemplary embodiment differs from the first exemplary embodiment and also from the second exemplary embodiment in that in this case the variable t originally used as a time variable is used as a location variable t.
  • An original description of a state at time t thus describes a state at a first location t in the third exemplary embodiment. The same applies in each case to a description of the state at a time t-1 or t + 1 or t + 2.
  • locations t-1, t, t + 1 and t + 2 are arranged in succession along a route in a predetermined direction of travel.
  • FIG. 6 shows a street 600 which is used by cars 601, 602, 603, 604, 605 and 606.
  • conductor loops 610, 611 integrated in the street 600 pick up electrical signals in a known manner and feed the electrical signals 615, 616 to a computer 620 via an input/output interface 621.
  • the electrical signals are digitized into a time series and stored in a memory 623, which is connected, via a bus 624, to an analog/digital converter 622 and a processor.
  • a traffic control system 650 is supplied with control signals 651, on the basis of which a predetermined speed specification 652 can be set in the traffic control system 650, or further information from traffic regulations can be transmitted via the traffic control system 650 to the drivers of the vehicles 601, 602, 603, 604, 605 and 606.
  • the local state variables are measured as described above using the conductor loops 610, 611.
  • the variables (v(t), p(t), q(t)) thus represent a state of the technical system “traffic” at a specific point in time t.
  • these variables are used to form an evaluation r(t) of a current state, for example with regard to traffic flow and homogeneity. This evaluation can be quantitative or qualitative.
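As an illustration (all names, units and the evaluation formula are assumptions, not from the patent), the measured local state variables can be grouped as follows:

```python
# Sketch: local traffic state at point t (here a location along the route).
# v(t): average speed, p(t): vehicle density, q(t): traffic flow;
# r(t): evaluation of the current state. All fields and units illustrative.

from dataclasses import dataclass

@dataclass
class TrafficState:
    v: float  # average speed [km/h], from conductor loops 610, 611
    p: float  # density [vehicles/km]
    q: float  # flow [vehicles/h]

def evaluate(state: TrafficState) -> float:
    """Illustrative evaluation r(t), e.g. with regard to traffic flow."""
    return state.q / max(state.p, 1e-6)  # one possible homogeneity proxy

u_t = TrafficState(v=95.0, p=22.0, q=2100.0)
r_t = evaluate(u_t)
```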
  • the traffic dynamics are modeled in two phases: a training phase and an application phase.
  • control signals 651 are formed from forecast variables determined in the application phase; they indicate which speed limit is to be selected for a future period (t+1).
  • the arrangement described in the first exemplary embodiment can also be used to determine the dynamics of an electrocardiogram (ECG). This enables indicators that point to an increased risk of heart attack to be determined at an early stage. A time series of ECG values measured on a patient is used as the input variable.
  • ECG: electrocardiogram
  • the arrangement according to the first exemplary embodiment is used for traffic modeling according to the third exemplary embodiment.
  • variable t originally used as a time variable (in the first exemplary embodiment) is used as a location variable t as described in the context of the third exemplary embodiment.
  • the arrangement according to the first exemplary embodiment is used in the context of speech processing (FIG. 10).
  • the basics of such speech processing are known from [3].
  • the arrangement (KRKNN) 1000 is used to determine an accentuation in a sentence 1010 to be accentuated.
  • sentence 1010 to be accentuated is broken down into its words 1011 and these are each classified 1012 (part-of-speech tagging).
  • the classifications 1012 are coded 1013 in each case.
  • each code 1013 is expanded by pause information 1014 (phrase break information), which indicates in each case whether a pause occurs after the respective word when the sentence 1010 to be accentuated is spoken.
  • a time series 1016 is formed from the extended codes 1015 of the sentence in such a way that a chronological sequence of states of the time series corresponds to the sequence of words in the sentence 1010 to be accentuated. This time series 1016 is applied to the arrangement 1000.
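A sketch of this preprocessing (the tag set, its numeric coding and the pause flag are placeholders; the patent does not specify the coding scheme in the available text):

```python
# Sketch: build the input time series 1016 for accentuation prediction.
# The POS tag set and its one-hot coding are placeholders, not from the patent.

POS_CODE = {"DET": 0, "NOUN": 1, "VERB": 2, "ADJ": 3, "ADP": 4}

def encode_sentence(tagged_words):
    """tagged_words: list of (word, pos_tag, pause_after) triples.

    Returns one state per word, in sentence order, so that the sequence
    of states of the time series matches the word order of the sentence.
    """
    series = []
    for word, pos, pause_after in tagged_words:
        code = [0.0] * len(POS_CODE)
        code[POS_CODE[pos]] = 1.0                 # classification 1012/1013
        code.append(1.0 if pause_after else 0.0)  # pause information 1014
        series.append(code)                       # extended code 1015
    return series                                 # time series 1016

series = encode_sentence([("the", "DET", False),
                          ("reactor", "NOUN", True),
                          ("stirs", "VERB", False)])
```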
  • the arrangement now determines for each word 1011 accentuation information 1020, which indicates whether and how strongly the respective word is accentuated when spoken:
  • HA: main accent, or strongly accentuated
  • NA: secondary accent, or slightly accentuated
  • KA: no accent, or not accentuated
  • the arrangement described in the second exemplary embodiment can also be used to forecast macroeconomic dynamics, such as an exchange rate trend, or other economic indicators, such as a stock exchange price.
  • an input variable is formed from time series of relevant macroeconomic or economic indicators, such as interest rates, currencies or inflation rates.
  • the arrangement according to the second exemplary embodiment is used in the context of speech processing (FIG. 11). The basics of such speech processing are known from [5], [6], [7] and [8].
  • the arrangement (KRKFKNN) 1100 is used to model a frequency curve of a syllable of a word in a sentence.
  • the sentence 1110 to be modeled is broken down into syllables 1111.
  • a state vector 1112 is determined, which describes the syllable phonetically and structurally.
  • Such a state vector 1112 comprises timing information 1113, phonetic information 1114, syntax information 1115 and emphasis information 1116.
  • a time series 1117 is formed from the state vectors 1112 of the syllables 1111 of the sentence 1110 to be modeled such that a chronological sequence of states of the time series 1117 corresponds to the sequence of the syllables 1111 in the sentence 1110 to be modeled. This time series 1117 is applied to the arrangement 1100.
  • the arrangement 1100 now determines for each syllable 1111 a parameter vector 1122 with parameters 1120 (fomaxpos, fomaxalpha, lp, rp) which describe the frequency curve 1121 of the respective syllable 1111.
  • the parameters 1120 and the description of a frequency curve 1121 by these parameters 1120 are known from [5], [6], [7] and [8].
  • otherwise, the statements regarding the second exemplary embodiment apply correspondingly to this embodiment.
  • FIG. 7 shows a structural alternative to the arrangement from FIG. 1 according to the first exemplary embodiment.
  • compared to the arrangement from FIG. 1, connections 701, 702, 703, 704, 705, 706, 707 and 708 are disconnected or interrupted in the alternative arrangement according to FIG. 7.
  • FIG. 8 shows a structural alternative to the arrangement from FIG. 3 according to the second exemplary embodiment.
  • components from FIG. 3 are shown with the same reference numerals in FIG. 8, with the same configuration.
  • a further structural alternative to the arrangement according to the first exemplary embodiment is shown in FIG. 9.
  • the arrangement according to FIG. 9 is a KRKNN with a fixed point recurrence.
  • compared to the arrangement from FIG. 1, additional connections 901, 902, 903 and 904 are closed in the alternative arrangement according to FIG. 9.
  • the additional connections 901, 902, 903 and 904 each have a connection matrix GT with weights.
  • This alternative arrangement can be used both in a training phase and in an application phase.
  • a possible implementation of a KRKNN for the SENN program, version 3.1, is specified below.
  • the implementation comprises various sections, each of which contains program code required for processing in SENN, version 3.1.
  • [The SENN 3.1 listing is corrupted in the available text: two print columns were interleaved during extraction, so the code cannot be reconstructed line by line. The recoverable structure is: INPUT clusters (mlp.input) built from the time-series files DATA/inter.txt, DATA/ger.txt, DATA/jap.txt and DATA/return.txt, using scaled one-step returns such as scale((dmdol - dmdol(-1))/dmdol(-1)) of the DM/US dollar exchange rate, US, German and Japanese industrial production, annual inflation rates and the German DAX index, applied at lags LAG -1 through LAG -6; TARGET clusters (mlp.past3, final2, final4, final6) with targets of the form rex1(4)/10, rex5(6)/10 or rex8(1)/10; and training-control sections specifying sequential and random pattern selection, pruning on Train.+Valid., hill-climber search control, adaptive uniform and Gaussian input noise (NoiseEta {1}, DampingFactor {1}), manipulator data files inputManip.dat, and the error function LnCosh with norm NoNorm.]

Abstract

The invention concerns a computer-assisted representation of several state descriptions changing over time. According to the invention, a first state description in a first state space is mapped by a mapping onto a second state description in a second state space, the second state description of a later state being taken into account. A subsequent mapping maps the second state description in the second state space back onto a third state description in the first state space.
EP01978185A 2000-09-29 2001-09-28 Procede et ensemble pour la representation assistee par ordinateur de plusieurs descriptions d'etat changeant dans le temps, et procede d'apprentissage d'un tel ensemble Withdrawn EP1384198A2 (fr)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
DE10048468 2000-09-29
DE10048468 2000-09-29
PCT/DE2001/003731 WO2002027654A2 (fr) 2000-09-29 2001-09-28 Procede et ensemble pour la representation assistee par ordinateur de plusieurs descriptions d'etat changeant dans le temps, et procede d'apprentissage d'un tel ensemble

Publications (1)

Publication Number Publication Date
EP1384198A2 true EP1384198A2 (fr) 2004-01-28

Family

ID=7658206

Family Applications (1)

Application Number Title Priority Date Filing Date
EP01978185A Withdrawn EP1384198A2 (fr) 2000-09-29 2001-09-28 Procede et ensemble pour la representation assistee par ordinateur de plusieurs descriptions d'etat changeant dans le temps, et procede d'apprentissage d'un tel ensemble

Country Status (4)

Country Link
US (1) US20040030663A1 (fr)
EP (1) EP1384198A2 (fr)
JP (1) JP2004523813A (fr)
WO (1) WO2002027654A2 (fr)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8463441B2 (en) 2002-12-09 2013-06-11 Hudson Technologies, Inc. Method and apparatus for optimizing refrigeration systems
DE10324045B3 (de) 2003-05-27 2004-10-14 Siemens Ag Verfahren sowie Computerprogramm mit Programmcode-Mitteln und Computerprogramm-Produkt zur Ermittlung eines zukünftigen Systemverhaltens eines dynamischen Systems
WO2005081076A2 (fr) * 2004-02-24 2005-09-01 Siemens Aktiengesellschaft Procede, programme informatique avec systemes de code de programme, et produit de programme informatique pour prevoir l'etat d'une chambre de combustion par utilisation d'un reseau neuronal recurrent
DE102004059684B3 (de) * 2004-12-10 2006-02-09 Siemens Ag Verfahren und Anordnung sowie Computerprogramm mit Programmmcode-Mitteln und Computerprogramm-Produkt zur Ermittlung eines zukünftigen Systemzustandes eines dynamischen Systems

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE3444067A1 (de) * 1984-12-03 1986-11-13 Wilhelm Dipl.-Ing.(TH) 3392 Clausthal-Zellerfeld Caesar Verfahren und einrichtung zur erzielung eines neuartigen rueckversetzungs- und repitiereffekts
US4901004A (en) * 1988-12-09 1990-02-13 King Fred N Apparatus and method for mapping the connectivity of communications systems with multiple communications paths
DE4215179A1 (de) * 1991-05-08 1992-11-12 Caterpillar Inc Prozessor und verarbeitendes element zum gebrauch in einem neural- oder nervennetzwerk
US5416899A (en) * 1992-01-13 1995-05-16 Massachusetts Institute Of Technology Memory based method and apparatus for computer graphics
EP0582885A3 (en) * 1992-08-05 1997-07-02 Siemens Ag Procedure to classify field patterns
DE4328896A1 (de) * 1992-08-28 1995-03-02 Siemens Ag Verfahren zum Entwurf eines neuronalen Netzes
TW284866B (fr) * 1994-07-08 1996-09-01 Philips Electronics Nv
EP1074900B1 (fr) * 1999-08-02 2006-10-11 Siemens Schweiz AG Dispositif prédictif pour la commande ou la régulation des variables d alimentation

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See references of WO0227654A2 *

Also Published As

Publication number Publication date
WO2002027654A2 (fr) 2002-04-04
JP2004523813A (ja) 2004-08-05
US20040030663A1 (en) 2004-02-12
WO2002027654A3 (fr) 2003-11-27

Similar Documents

Publication Publication Date Title
EP2649567B1 (fr) Procédée pour la modélisation sur ordinateur d'un système technique
EP2135140B1 (fr) Procédé de commande et/ou de réglage assisté par ordinateur d'un système technique
EP2097793B1 (fr) Procédé de commande et/ou de régulation d'un système technique assistés par ordinateur
AT511577B1 (de) Maschinell umgesetztes verfahren zum erhalten von daten aus einem nicht linearen dynamischen echtsystem während eines testlaufs
EP2112568A2 (fr) Procédé de commande et/ou réglage assistées par ordinateur d'un système technique
EP1145192B1 (fr) Configuration d'elements informatiques interconnectes, procede de determination assistee par ordinateur d'une dynamique a la base d'un processus dynamique et procede d'apprentissage assiste par ordinateur d'une configuration d'elements informatiques interconnectes
EP1021793A2 (fr) Ensemble d'elements de calcul relies les uns aux autres, procede de determination assistee par ordinateur d'une dynamique a la base d'un processus dynamique, et procede d'entrainement assiste par ordinateur pour un ensemble d'elements de calcul relies les uns aux autres
DE112020006021T5 (de) Auf maschinelles lernen basierendes verfahren und vorrichtung für die berechnung und verifizierung von verzögerungen des entwurfs integrierter schaltungen
DE202018102632U1 (de) Vorrichtung zum Erstellen einer Modellfunktion für ein physikalisches System
DE112020003050T5 (de) Fehlerkompensation in analogen neuronalen netzen
WO2002027654A2 (fr) Procede et ensemble pour la representation assistee par ordinateur de plusieurs descriptions d'etat changeant dans le temps, et procede d'apprentissage d'un tel ensemble
DE10324045B3 (de) Verfahren sowie Computerprogramm mit Programmcode-Mitteln und Computerprogramm-Produkt zur Ermittlung eines zukünftigen Systemverhaltens eines dynamischen Systems
WO2003025851A2 (fr) Procede et dispositif de determination d'un premier etat courant d'une premiere suite temporelle de premiers etats respectifs d'un systeme a variation dynamique
EP1145190B1 (fr) Ensemble de plusieurs elements de calcul relies entre eux, procede de determination assistee par ordinateur d'une dynamique se trouvant a la base d'un processus dynamique et procede pour l'entrainement assiste par ordinateur d'un ensemble d'elements de calcul relies entre eux
WO2022058140A1 (fr) Commande d'un système technique au moyen d'une unité de calcul pour intelligence artificielle
DE102008014126B4 (de) Verfahren zum rechnergestützten Lernen eines rekurrenten neuronalen Netzes
DE10150914C1 (de) Verfahren zur strukturellen Analyse und Korrektur eines mittels einer Computersprache beschriebenen Differentialgleichungssystems
EP1093639A2 (fr) Reseau neuronal, et procede et dispositif pour l'entrainement d'un reseau neuronal
DE10047172C1 (de) Verfahren zur Sprachverarbeitung
EP1194890B1 (fr) Dispositif, procede, produit comportant un programme d'ordinateur et support de stockage lisible par ordinateur pour la compensation assistee par ordinateur d'un etat de desequilibre d'un systeme technique
EP1114398B1 (fr) Procede pour entrainer un reseau neuronal, procede de classification d'une sequence de grandeurs d'entree au moyen d'un reseau neuronal, reseau neuronal et dispositif pour l'entrainement d'un reseau neuronal
DE10325513A1 (de) Verfahren und Vorrichtung zum Erstellen eines Modells einer Schaltung zur formalen Verifikation
EP1483634B1 (fr) Procede de simulation d'un systeme technique et simulateur associe
WO2003079285A2 (fr) Procede et systeme ainsi que programme informatique dote de moyens de code de programme et produit de programme informatique servant a ponderer des grandeurs d'entree pour une structure neuronale, ainsi que structure neuronale associee
EP1190383B1 (fr) Procede de determination assistee par ordinateur de l'appartenance d'une grandeur d'entree donnee a un groupe

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20030314

AK Designated contracting states

Kind code of ref document: A2

Designated state(s): DE FR GB IT SE

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20050331