EP2543006A1 - Method for the computer-aided learning of a recurrent neural network for modeling a dynamic system - Google Patents
Method for the computer-aided learning of a recurrent neural network for modeling a dynamic system
Info
- Publication number
- EP2543006A1 (application EP11714531A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- network
- vector
- causal
- recurrent neural
- neural network
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Links
- 238000013528 artificial neural network Methods 0.000 title claims abstract description 54
- 238000000034 method Methods 0.000 title claims abstract description 51
- 230000000306 recurrent effect Effects 0.000 title claims abstract description 50
- 239000013598 vector Substances 0.000 claims abstract description 90
- 230000001364 causal effect Effects 0.000 claims abstract description 36
- 238000011161 development Methods 0.000 claims abstract description 13
- 230000002123 temporal effect Effects 0.000 claims abstract description 8
- 239000011159 matrix material Substances 0.000 claims description 38
- 230000006870 function Effects 0.000 claims description 21
- 230000004913 activation Effects 0.000 claims description 12
- 238000005457 optimization Methods 0.000 claims description 4
- 238000004590 computer program Methods 0.000 claims description 3
- 239000002994 raw material Substances 0.000 abstract description 5
- 238000009434 installation Methods 0.000 abstract 1
- 230000018109 developmental process Effects 0.000 description 11
- 230000008878 coupling Effects 0.000 description 3
- 238000010168 coupling process Methods 0.000 description 3
- 238000005859 coupling reaction Methods 0.000 description 3
- 230000006399 behavior Effects 0.000 description 2
- 238000006243 chemical reaction Methods 0.000 description 2
- 238000001914 filtration Methods 0.000 description 2
- 238000003491 array Methods 0.000 description 1
- 230000008859 change Effects 0.000 description 1
- 230000002860 competitive effect Effects 0.000 description 1
- 230000001419 dependent effect Effects 0.000 description 1
- 230000005611 electricity Effects 0.000 description 1
- 238000003062 neural network model Methods 0.000 description 1
- 210000002569 neuron Anatomy 0.000 description 1
- 230000002441 reversible effect Effects 0.000 description 1
- 230000007704 transition Effects 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/044—Recurrent networks, e.g. Hopfield networks
Definitions
- the invention relates to a method for computer-aided learning of a recurrent neural network for modeling a dynamic system and to a method for predicting the observables of a dynamic system based on a learned recurrent neural network and a corresponding computer program product.
- Recurrent neural networks are used today in various fields of application in order to model the temporal development of a dynamic system in such a way that a recurrent neural network learned with training data of the dynamic system can predict the observables (observable states) of the system well.
- In addition, unknown hidden states are modeled by the recurrent neural network as states of the dynamic system.
- A causal, i.e. temporally forward-directed, flow of information between temporally successive states is considered.
- Such dynamic systems are often insufficiently described by known recurrent neural networks.
- the object of the invention is therefore to provide a method for computer-aided learning of a recurrent neural network, with which dynamic systems can be better modeled.
- The method according to the invention serves for the computer-aided learning of a recurrent neural network for modeling a dynamic system which is characterized at respective times by an observable vector comprising one or more observables (i.e. observable states of the dynamic system) as entries.
- The method is applicable to arbitrary dynamic systems; for example, the development of energy prices and/or commodity prices can be modeled with it.
- the method can be used to model any technical system which changes dynamically over time based on corresponding observable state variables of the technical system, in order thereby to predict observables of the technical system with a correspondingly learned network.
- One example of such a technical system is a gas turbine.
- the recurrent neural network used in the method according to the invention comprises a first subnet in the form of a causal network describing a temporally forward information flow between first state vectors of the dynamic system, wherein a first state vector comprises one or more first entries at a respective time each associated with an entry of the observable vector, and one or more hidden (ie unobservable) states of the dynamic system.
- A second subnetwork in the form of a retro-causal network is further provided, the retro-causal network describing a temporally backward flow of information between second state vectors of the dynamic system, wherein a second state vector at a respective time comprises one or more second entries, each of which belongs to an entry of the observable vector, as well as one or more hidden states of the dynamic system.
- At a respective time, the observable vector is determined in such a way that the first entries of the first state vector are combined with the second entries of the second state vector.
- the causal and retro-causal networks are learned based on training data containing a sequence of temporally successive known observable vectors.
- The method according to the invention is characterized in that a dynamic system is described by a recurrent neural network which takes into account both a flow of information from the past into the future and a flow of information from the future into the past.
- In this way, dynamic systems can be suitably modeled in which the observables at a given time are also influenced by predicted future observable values.
- In a preferred embodiment, the first and second entries of the first and second state vectors are corrected during learning using the difference between the observable vector determined in the recurrent neural network and the known observable vector at the respective time.
- The first and second state vectors with the corrected first and second entries are then reused in learning. In this way, so-called teacher forcing is achieved, in which observables determined in the recurrent neural network are adapted at each time step to the observables according to the training data.
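This teacher-forcing correction can be sketched as follows. The sketch below is a minimal illustration in NumPy; the function name and the layout of the state vector (observables first, hidden states after) are assumptions for illustration, not part of the patent:

```python
import numpy as np

def teacher_force(s, y_known, n_obs):
    """Correct the observable entries of a state vector during learning:
    the difference between the determined and the known observables is
    subtracted, which amounts to replacing them by the known values."""
    s = s.copy()
    s[:n_obs] -= s[:n_obs] - y_known   # observables <- known observables
    return s

s = np.array([0.3, -0.1, 0.7, 0.2])    # 2 observables + 2 hidden states
y_d = np.array([0.5, 0.0])             # known observables (training data)
s_corr = teacher_force(s, y_d, n_obs=2)
# s_corr[:2] now equals y_d; the hidden entries s_corr[2:] are unchanged
```

The corrected vector is then propagated forward (or backward) as usual, so the network dynamics are always driven by the training observables.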
- For learning the causal and retro-causal networks, a target value is determined which represents the difference vector between the observable vector determined in the recurrent neural network and the known observable vector at the respective time.
- The optimization goal of the learning is to minimize the sum of the absolute values or squared values of the difference vectors at the respective times for which a known observable vector exists in the training data. This ensures in a simple manner that the recurrent neural network correctly models the dynamics of the system under consideration.
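As a sketch, this optimization goal can be written as the sum of the squared difference vectors over all times with known training observables (NumPy; the function and variable names are illustrative assumptions):

```python
import numpy as np

def training_error(predicted, known):
    """Sum of the squared norms of the difference vectors tar_t = y_t - y^d_t
    over all times t for which a known observable vector exists."""
    return sum(float(np.sum((y - y_d) ** 2)) for y, y_d in zip(predicted, known))

predicted = [np.array([0.5, 1.0]), np.array([0.0, 2.0])]   # network observables
known     = [np.array([0.5, 0.0]), np.array([1.0, 2.0])]   # training data
err = training_error(predicted, known)
# err = (0^2 + 1^2) + (1^2 + 0^2) = 2.0
```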
- a first state vector is converted at a respective time into a first state vector at a subsequent point in time by multiplication with a matrix assigned to the causal network and the application of an activation function.
- the activation function is first applied to the state vector at the respective time, and only then is a multiplication carried out with the matrix assigned to the causal network. This ensures that observables can be described which are not limited by the value range of the activation function.
- Analogously, a second state vector at a respective point in time is converted into a second state vector at a preceding time by multiplication with a matrix assigned to the retro-causal network and the application of an activation function.
- the activation function is again first applied to the second state vector at the respective time, and only then is a multiplication carried out with the matrix assigned to the retro-causal network.
- The activation functions mentioned above are preferably tanh functions (hyperbolic tangent), which are often used in recurrent neural networks.
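The two variants of the state transition described above can be contrasted in a short NumPy sketch (illustrative only; the function names are assumptions):

```python
import numpy as np

def step_standard(A, s):
    # tanh applied after the matrix multiplication: every entry of the
    # next state (and hence every observable) lies in (-1, 1)
    return np.tanh(A @ s)

def step_swapped(A, s):
    # tanh applied first, then the multiplication with A: the next state
    # is not limited to the value range of the activation function
    return A @ np.tanh(s)

A = np.array([[3.0]])
s = np.array([2.0])
# step_standard(A, s) stays below 1 in magnitude, while
# step_swapped(A, s) is about 3 * tanh(2), i.e. well above 1
```

This makes concrete why the swapped variant can describe observables that are not limited by the value range of the activation function.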
- The invention further comprises a method for predicting observables of a dynamic system, in which the prediction is performed with a recurrent neural network that has been learned with the learning method according to the invention on the basis of training data comprising known observable vectors of the dynamic system.
- the invention further relates to a computer program product having a program code stored on a machine-readable carrier for carrying out the methods described above when the program is run on a computer.
- FIGS. 1 and 2 show two variants of known recurrent neural networks for modeling a dynamic system
- FIG. 3 shows a variant of a recurrent neural network based on FIG. 2, which according to the invention is used as a causal subnetwork;
- FIG. 4 shows a variant, known from the prior art, of the learning of the network of FIG. 3;
- FIGS. 5 and 6 show variants of the learning of the causal network of FIG. 3, which are used in embodiments of the method according to the invention.
- FIG. 7 shows a retro-causal network which is used in the method according to the invention in combination with the causal network of FIG. 3;
- FIG. 8 shows a variant of the learning of the retro-causal network of FIG. 7;
- FIG. 9 shows a variant of the learning of the retro-causal network of FIG. 7 by means of teacher forcing;
- FIG. 10 shows an embodiment of a recurrent neural network according to the invention, which combines the networks of FIGS. 3 and 7 with each other.
- Recurrent neural networks for modeling the temporal behavior of a dynamic system are well known in the art. These networks typically comprise a plurality of layers, each including a plurality of neurons, and can be learned in a suitable manner on the basis of training data from known states of the dynamic system, such that future states of the dynamic system can be predicted.
- Fig. 1 shows a variant, known from the prior art, of a neural network that models an open dynamic system.
- The network in this case comprises an input layer I with temporally consecutive state vectors u_{t-3}, u_{t-2}, u_{t-1} and u_t, which represent corresponding input variables of the dynamic system.
- These input variables can be, for example, manipulated variables of a technical system modeled using the neural network.
- The individual state vectors of the input layer I are connected via matrices B to corresponding hidden state vectors s_{t-2}, s_{t-1}, etc. of a hidden layer.
- the hidden state vectors comprise a plurality of hidden states of the dynamic system and form the (unobservable) state space of the dynamic system.
- the individual hidden state vectors are connected to each other via matrices A.
- The network further includes an output layer O with output variables in the form of state vectors y_{t-2}, y_{t-1}, ..., y_{t+4}, which are coupled to the corresponding hidden state vectors s_{t-2}, s_{t-1}, ..., s_{t+4} via the matrix C.
- The states of the output layer are states of the dynamic system which result from the corresponding input values of the input layer I. On the basis of training data, which consist of known input variables and the known output variables resulting from them, the neural network of FIG. 1 can be learned by known methods, such as error backpropagation, and then used for prediction.
- the network of FIG. 1 is based on a modeling of the considered dynamic system in the form of a superposition of an autonomous and an externally driven subsystem.
- Fig. 2 shows a further variant of a recurrent neural network, which is used in the embodiments of the method according to the invention described below.
- This network models a closed dynamic system and differs from the network of FIG. 1 in that it no longer distinguishes between inputs u_τ and outputs y_τ, where τ in the following designates an arbitrary point in time. Rather, both the input variables and the output variables are regarded as observables, i.e. observable states of an observable vector of the dynamic system.
- The network of FIG. 2 comprises a first layer L1 and a second layer L2, wherein the first layer L1 represents a temporally forward information flow between individual state vectors s_{t-2}, s_{t-1}, ..., s_{t+3} of the modeled dynamic system.
- A state vector s_τ initially contains as entries the observables, which correspond to the state vectors y_τ and u_τ of FIG. 1, followed by the unobservable hidden states, where the number of hidden states is usually much larger than the number of observables.
- The individual state vectors in layer L1 are connected by matrices A, which are learned appropriately on the basis of training data. At the beginning of the learning, a suitable bias is predetermined in layer L1, which is designated in FIG. 2 and in all subsequent figures by s0.
- A suitably learned recurrent neural network of FIG. 2 provides in the second layer L2 the observables y_τ and u_τ at the respective times.
- Via the matrix [Id, 0], those entries of the corresponding state vectors s_τ which correspond to observables are extracted. The matrix [Id, 0] has as many columns as the dimension of the state vector s_τ and as many rows as the number of observables. The left part of the matrix forms a square identity matrix, and the rest of the matrix contains only zeros, whereby the filtering of the observables from the state vector s_τ is achieved. In this way, the observables are embedded in a larger state vector s_τ.
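A sketch of this filtering matrix in NumPy (the helper name is an assumption for illustration):

```python
import numpy as np

def observable_filter(n_obs, n_state):
    """Build the matrix [Id, 0]: one row per observable, one column per
    state-vector entry; the left block is an identity, the rest zeros."""
    F = np.zeros((n_obs, n_state))
    F[:, :n_obs] = np.eye(n_obs)
    return F

s = np.array([0.5, -0.2, 0.9, 0.1, 0.4])   # 2 observables + 3 hidden states
y = observable_filter(2, 5) @ s            # extracts array([0.5, -0.2])
```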
- The network of FIG. 2 also represents a causal network, since the flow of information between the states of layer L1 is directed forward in time, from the past into the future.
- Fig. 3 shows a recurrent neural network based on FIG. 2, in which all observables are now designated throughout as observable vectors y_{t-6}, y_{t-5}, ..., y_{t+3}. The notation y_τ thus comprises both the output variable y_τ and the input variable u_τ of FIG. 2. This notation is also used below in all other described variants of recurrent neural networks.
- The observable vectors y_{t+1}, y_{t+2} and y_{t+3} to be predicted by the network are indicated by dashed circles. The current time is denoted by t in FIG. 3 and in all other figures. Past times are thus the times t-1, t-2, etc., and future times are the times t+1, t+2, t+3, and so on.
- Fig. 4 shows a variant, known from the prior art, of the learning of the recurrent neural network of FIG. 3, where y^d_{t-3}, y^d_{t-2}, y^d_{t-1} and y^d_t are known observable vectors in accordance with predetermined training data of the modeled dynamic system.
- The matrix [Id, 0] corresponds to the above-explained matrix for filtering the observables from the corresponding state vector. It has a number of rows corresponding to the number of observables; its left part forms a square identity matrix and the remaining entries of the matrix contain only zeros.
- the network of FIG. 4 further contains the matrix C, with which a state s T is converted into a state r T.
- The state r_τ denotes a filtered state which contains only the hidden states of the vector s_τ. Consequently, the matrix C is a matrix which contains ones on the diagonal elements corresponding to the rows or columns of the hidden states, and whose remaining entries are set to zero.
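The matrix C can be sketched analogously to the filter [Id, 0] above (NumPy; the helper name is an illustrative assumption):

```python
import numpy as np

def hidden_filter(n_obs, n_state):
    """Matrix C: ones on the diagonal elements belonging to the hidden
    states, all other entries zero, so that r = C @ s retains only the
    hidden states of s."""
    C = np.zeros((n_state, n_state))
    idx = np.arange(n_obs, n_state)
    C[idx, idx] = 1.0
    return C

s = np.array([0.5, -0.2, 0.9, 0.1, 0.4])   # 2 observables + 3 hidden states
r = hidden_filter(2, 5) @ s                # -> array([0. , 0. , 0.9, 0.1, 0.4])
```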
- The search is for the matrix A which minimizes the quadratic error, summed over the times t-m ≤ τ ≤ t, between the known observable vectors and the observable vectors determined by the network.
- Id in FIG. 5 designates a corresponding identity matrix for the state vector at which the arrow labeled with this matrix begins.
- In FIG. 5, a target variable or target value tar is now introduced, which represents the difference vector between the observable vector y_τ determined by the recurrent neural network within the state vector s_τ and the known observable vector y^d_τ.
- This target value, which in the ideal case is zero, again serves to replace the corresponding determined observables in the vectors s_τ by the known observables according to the training data, whereby teacher forcing is achieved.
- The optimization target is analogous to that for the network of FIG. 4.
- FIG. 6 shows a preferred variant of a learning, which is also used in the neural network structure according to the invention described below.
- The difference between the recurrent neural network of FIG. 6 and the recurrent neural network of FIG. 5 is that, in the above equations (5) and (6), the position of the matrix A is interchanged with the position of the function tanh.
- a suitable learning of a causal network with forward flow of information has been described.
- The invention is based on the finding that a causal model is not always suitable for the description of a dynamic system.
- In particular, there are dynamic systems which also have a retro-causal information flow in the reverse direction, from the future into the past. These are dynamic systems into whose temporal development planning aspects, including the forecast of future observables, enter.
- In such systems, not only temporally preceding state vectors but also predicted future state vectors are taken into account in the temporal change of a corresponding state vector of the dynamic system.
- For example, the price is determined not only by supply and demand, but also by planning aspects of the seller or buyer concerning the sale or purchase of energy or raw materials.
- The method according to the invention is based on the idea of modeling a dynamic system in such a way that not only an information flow in the causal direction, from the past into the future, is considered, but also an information flow in the retro-causal direction, from the future into the past.
- Such an information flow can be realized by a retro-causal network.
- Such a network is shown in FIG.
- The network of FIG. 7 differs from the network of FIG. 3 in that the information flow between the states s_τ runs in the opposite direction, from the future into the past, the method in turn being initialized with a bias s0, which now, however, is a state in the future.
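A minimal sketch of such a backward information flow, using the state transition of the preferred variant described above (activation first, then the matrix A' of the retro-causal network); all names are illustrative assumptions:

```python
import numpy as np

def retro_causal_rollout(A_prime, s_bias, n_steps):
    """Propagate a state backward in time, starting from a bias state
    that lies in the future: s'_{tau-1} = A' @ tanh(s'_tau)."""
    states = [s_bias]
    for _ in range(n_steps):
        states.append(A_prime @ np.tanh(states[-1]))
    return states[::-1]   # ordered from the earliest to the latest time

A_p = np.eye(2) * 0.5
s0 = np.array([1.0, -1.0])                 # bias state s0 in the future
states = retro_causal_rollout(A_p, s0, n_steps=3)
# states[-1] is the future bias state s0; earlier entries are past states
```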
- the network of FIG. 7 can be learned analogously to the network of FIG. 3 via the minimization of a target value tar, as indicated in FIG. 8.
- The invention is now based on a combination of a causal network with a retro-causal network, whereby a recurrent neural network with an information flow both from the past into the future and from the future into the past is made possible.
- In this way, dynamic systems can be modeled in which predicted future states also play a role in the dynamic development of the states.
- FIG. 10 shows generically a combination according to the invention of a causal network with a retro-causal network, whereby a recurrent neural network is created which can be learned in a suitable manner.
- The network is composed of a causal network N1 in the lower part and a retro-causal network N2 in the upper part.
- The network N1 corresponds to the causal network of FIG. 3.
- The network N2 corresponds to the retro-causal network of FIG. 7, where in the retro-causal network the matrices are now denoted by A' and the states by s'_τ, since the matrices and states of the causal and the retro-causal network may differ. Both networks are coupled to each other via the corresponding observable vector y_τ.
- FIG. 11 shows, based on the network of FIG. 10, a learning of the network by means of teacher forcing.
- This teacher forcing has been explained separately above for the causal network in Fig. 6 and the retro-causal network in Fig. 9.
- For the time t, the observables contained in the state vector s_t are designated by A_t and the observables contained in the state vector s'_t by A_t'.
- The sum of A_t and A_t' represents the observable vector determined by the recurrent network, and the target value is the difference between this sum and the actual observable vector y^d_t according to the training data.
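This coupling can be sketched as follows (NumPy; the layout with observables at the start of each state vector and the function names are assumptions for illustration):

```python
import numpy as np

def combined_observable(s_causal, s_retro, n_obs):
    """Observable vector of the combined network: the sum of the observable
    contributions of the causal state s_t and the retro-causal state s'_t."""
    return s_causal[:n_obs] + s_retro[:n_obs]

def target_value(s_causal, s_retro, y_known, n_obs):
    """tar: difference between the network's observable vector and the
    known observable vector y^d_t from the training data."""
    return combined_observable(s_causal, s_retro, n_obs) - y_known

s_t  = np.array([0.2, 0.1, 0.7])    # 2 observables + 1 hidden (causal)
s_tp = np.array([0.3, -0.1, 0.4])   # 2 observables + 1 hidden (retro-causal)
y_d  = np.array([0.5, 0.0])
tar = target_value(s_t, s_tp, y_d, n_obs=2)   # -> array([0., 0.])
```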
- For learning the network, error backpropagation with shared weights, well known from the prior art, is used, which is shown in FIG.
- Error backpropagation with shared weights is achieved in that, in two copies of the network of FIG. 11, once the error backpropagation for the causal network N1 and once the error backpropagation for the retro-causal network N2 is calculated, it being ensured that always the same matrix A and always the same matrix A' are used in both copies of the network.
- Error backpropagation with shared weights is well known to the person skilled in the art and will therefore not be explained in further detail.
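The effect of shared weights can be illustrated with a small numerical sketch: the gradient of the unrolled loss with respect to a matrix A that is reused at every time step accumulates the contributions of all unrolled copies. Everything below (a linear toy network, the function names) is an illustrative assumption, not the patent's network:

```python
import numpy as np

def unrolled_loss(A, s0, targets):
    """Loss of a toy causal network s_{t+1} = A @ s_t with the SAME
    (shared) matrix A at every step; here the observables are the
    full state, compared with the target at each step."""
    s, loss = s0, 0.0
    for y in targets:
        s = A @ s
        loss += float(np.sum((s - y) ** 2))
    return loss

def shared_weight_gradient(A, s0, targets, eps=1e-6):
    """Numerical gradient of the unrolled loss w.r.t. the shared matrix A;
    because the same A appears in every unrolled copy of the network,
    the gradient is the sum of the per-copy contributions."""
    g = np.zeros_like(A)
    for i in range(A.shape[0]):
        for j in range(A.shape[1]):
            Ap, Am = A.copy(), A.copy()
            Ap[i, j] += eps
            Am[i, j] -= eps
            g[i, j] = (unrolled_loss(Ap, s0, targets)
                       - unrolled_loss(Am, s0, targets)) / (2 * eps)
    return g

# 1-dimensional check: loss(a) = (a - y1)^2 + (a^2 - y2)^2 for s0 = 1,
# so dL/da = 2*(a - y1) + 4*a*(a^2 - y2), the sum over both copies of A
A = np.array([[0.8]])
g = shared_weight_gradient(A, np.array([1.0]),
                           [np.array([0.5]), np.array([0.2])])
```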
- The method according to the invention described above has a number of advantages.
- In particular, those dynamic systems can be learned in which predicted future states of the dynamic system play a role in the current state.
- the method can be used for different dynamic systems.
- The dynamic system can represent, for example, the development over time of energy prices or electricity prices and/or raw-material prices, where various types of energy (e.g. gas, oil) and/or raw materials as well as other economic factors, such as the exchange rates of different currencies and stock indices, can be considered as observables.
- With a recurrent neural network learned with training data, appropriate predictions of future price developments for energy and/or raw materials can then be made.
- Another area of application is the modeling of the dynamic behavior of a technical system.
- For example, the recurrent neural network according to the invention can be used for predicting the observable states of a gas turbine and/or a wind turbine, or of any other technical system.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- General Health & Medical Sciences (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Evolutionary Computation (AREA)
- Artificial Intelligence (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Health & Medical Sciences (AREA)
- Management, Administration, Business Operations System, And Electronic Commerce (AREA)
Abstract
The invention relates to a method for the computer-aided learning of a recurrent neural network for modeling a dynamic system which is characterized at respective times by an observable vector comprising one or more observables as entries. According to the invention, a neural network is learned which comprises both a causal network with a temporally forward-directed information flow and a retro-causal network with a temporally backward-directed information flow. The states of the dynamic system are characterized in the causal network by first state vectors and in the retro-causal network by second state vectors, each of which contains observables of the dynamic system as well as hidden states of the dynamic system. The two networks are coupled by combining the observables of the corresponding first and second state vectors, and they are learned on the basis of training data comprising known observable vectors. The method according to the invention is characterized in that dynamic systems in which predicted future observables influence the current value of the observables can also be modeled. The method is particularly suitable for modeling the development over time of energy prices and/or raw-material prices. It can also be used for modeling observables of arbitrary technical systems such as, for example, gas turbines and/or wind turbines.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
DE102010014906 | 2010-04-14 | ||
PCT/EP2011/055664 WO2011128313A1 (fr) | 2010-04-14 | 2011-04-12 | Procédé d'apprentissage assisté par ordinateur d'un réseau neuronal récurrent pour la modélisation d'un système dynamique |
Publications (1)
Publication Number | Publication Date |
---|---|
EP2543006A1 true EP2543006A1 (fr) | 2013-01-09 |
Family
ID=44041664
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP11714531A Withdrawn EP2543006A1 (fr) | 2010-04-14 | 2011-04-12 | Procédé d'apprentissage assisté par ordinateur d'un réseau neuronal récurrent pour la modélisation d'un système dynamique |
Country Status (4)
Country | Link |
---|---|
US (1) | US9235800B2 (fr) |
EP (1) | EP2543006A1 (fr) |
CN (1) | CN102934131A (fr) |
WO (1) | WO2011128313A1 (fr) |
Families Citing this family (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8775341B1 (en) | 2010-10-26 | 2014-07-08 | Michael Lamport Commons | Intelligent control with hierarchical stacked neural networks |
US9015093B1 (en) | 2010-10-26 | 2015-04-21 | Michael Lamport Commons | Intelligent control with hierarchical stacked neural networks |
DE102011004693A1 (de) * | 2011-02-24 | 2012-08-30 | Siemens Aktiengesellschaft | Verfahren zum rechnergestützten Lernen eines rekurrenten neuronalen Netzes zur Modellierung eines dynamischen Systems |
CN102982229B (zh) * | 2012-09-06 | 2016-06-08 | 淮阴工学院 | 一种基于神经网络的多品种商品价格预测的数据预处理方法 |
CN103530690A (zh) * | 2013-10-31 | 2014-01-22 | 中国科学院上海微系统与信息技术研究所 | 一种神经元器件及神经网络 |
KR102239714B1 (ko) | 2014-07-24 | 2021-04-13 | 삼성전자주식회사 | 신경망 학습 방법 및 장치, 데이터 처리 장치 |
US9495633B2 (en) * | 2015-04-16 | 2016-11-15 | Cylance, Inc. | Recurrent neural networks for malware analysis |
US10387775B2 (en) * | 2015-09-09 | 2019-08-20 | Emerson Process Management Power & Water Solutions, Inc. | Model-based characterization of pressure/load relationship for power plant load control |
CN108351982B (zh) * | 2015-11-12 | 2022-07-08 | 谷歌有限责任公司 | 卷积门控递归神经网络 |
WO2017114810A1 (fr) | 2015-12-31 | 2017-07-06 | Vito Nv | Procédés, dispositifs de commande et systèmes pour la commande de systèmes de distribution à l'aide d'une architecture de réseau neuronal |
EP3374932B1 (fr) * | 2016-02-03 | 2022-03-16 | Google LLC | Modèles de réseaux neuronaux récurrents compressés |
US10496996B2 (en) | 2016-06-23 | 2019-12-03 | Capital One Services, Llc | Neural network systems and methods for generating distributed representations of electronic transaction information |
US11176588B2 (en) * | 2017-01-16 | 2021-11-16 | Ncr Corporation | Network-based inventory replenishment |
US10853724B2 (en) | 2017-06-02 | 2020-12-01 | Xerox Corporation | Symbolic priors for recurrent neural network based semantic parsing |
EP3639109A4 (fr) | 2017-06-12 | 2021-03-10 | Vicarious FPC, Inc. | Systèmes et procédés de prédiction d'événement à l'aide de réseaux de schéma |
US10715459B2 (en) | 2017-10-27 | 2020-07-14 | Salesforce.Com, Inc. | Orchestration in a multi-layer network |
US11250038B2 (en) * | 2018-01-21 | 2022-02-15 | Microsoft Technology Licensing, Llc. | Question and answer pair generation using machine learning |
EP3525507B1 (fr) * | 2018-02-07 | 2021-04-21 | Rohde & Schwarz GmbH & Co. KG | Procédé et système d'essai pour essai de réseau mobile ainsi que système de prédiction |
US11194968B2 (en) * | 2018-05-31 | 2021-12-07 | Siemens Aktiengesellschaft | Automatized text analysis |
EP3751466A1 (fr) * | 2019-06-13 | 2020-12-16 | Siemens Aktiengesellschaft | Procédé de prédiction d'un niveau de pollution dans l'air |
CN111259785B (zh) * | 2020-01-14 | 2022-09-20 | 电子科技大学 | 基于时间偏移残差网络的唇语识别方法 |
US20230206254A1 (en) * | 2021-12-23 | 2023-06-29 | Capital One Services, Llc | Computer-Based Systems Including A Machine-Learning Engine That Provide Probabilistic Output Regarding Computer-Implemented Services And Methods Of Use Thereof |
CN117272303B (zh) * | 2023-09-27 | 2024-06-25 | 四川大学 | 一种基于遗传对抗的恶意代码样本变体生成方法及系统 |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE10324045B3 (de) * | 2003-05-27 | 2004-10-14 | Siemens Ag | Verfahren sowie Computerprogramm mit Programmcode-Mitteln und Computerprogramm-Produkt zur Ermittlung eines zukünftigen Systemverhaltens eines dynamischen Systems |
JP2007280054A (ja) * | 2006-04-06 | 2007-10-25 | Sony Corp | 学習装置および学習方法、並びにプログラム |
DE102008014126B4 (de) * | 2008-03-13 | 2010-08-12 | Siemens Aktiengesellschaft | Verfahren zum rechnergestützten Lernen eines rekurrenten neuronalen Netzes |
-
2011
- 2011-04-12 WO PCT/EP2011/055664 patent/WO2011128313A1/fr active Application Filing
- 2011-04-12 CN CN2011800294457A patent/CN102934131A/zh active Pending
- 2011-04-12 US US13/640,543 patent/US9235800B2/en active Active
- 2011-04-12 EP EP11714531A patent/EP2543006A1/fr not_active Withdrawn
Non-Patent Citations (1)
Title |
---|
See references of WO2011128313A1 * |
Also Published As
Publication number | Publication date |
---|---|
WO2011128313A1 (fr) | 2011-10-20 |
US20130204815A1 (en) | 2013-08-08 |
CN102934131A (zh) | 2013-02-13 |
US9235800B2 (en) | 2016-01-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2011128313A1 (fr) | Procédé d'apprentissage assisté par ordinateur d'un réseau neuronal récurrent pour la modélisation d'un système dynamique | |
EP2135140B1 (fr) | Procédé de commande et/ou de réglage assisté par ordinateur d'un système technique | |
EP2106576B1 (fr) | Procédé de commande et/ou de régulation d'un système technique assistées par ordinateur | |
EP2724296B1 (fr) | Procédé à la modélisation d'un système technique utilisant l'ordinateur | |
EP2112568B1 (fr) | Procédé de commande et/ou réglage assistées par ordinateur d'un système technique | |
EP2519861B1 (fr) | Procédé de commande et/ou de régulation assistée par ordinateur d'un système technique | |
EP2296062B1 (fr) | Procédé d'apprentissage assisté par ordinateur d'une commande et/ou d'un réglage d'un système technique | |
WO2014121863A1 (fr) | Procédé et dispositif de commande d'une installation de production d'énergie exploitable avec une source d'énergie renouvelable | |
WO2012076306A1 (fr) | Procédé de modélisation informatique d'un système technique | |
EP1052558A1 (fr) | Procédé et dispositif d'estimation d' état | |
DE102006058423A1 (de) | Verfahren und Systeme zur vorhersagenden Modellierung unter Nutzung eines Modellkollektivs | |
DE102019212773A1 (de) | Verfahren zur Stabilisierung eines elektrischen Energienetzes | |
EP1627263B1 (fr) | Procede et programme informatique comportant des moyens de code de programme et programme informatique pour determiner un comportement futur d'un systeme dynamique | |
WO2020043522A1 (fr) | Procédé de commande d'un échange d'énergie entre des sous-systèmes énergétiques dans des conditions harmonisées ; centrale de commande ; système énergétique ; programme informatique et support d'enregistrement | |
WO2012113635A1 (fr) | Procédé d'apprentissage assisté par ordinateur d'un réseau neuronal récurrent pour la modélisation d'un système dynamique | |
DE102008014126B4 (de) | Verfahren zum rechnergestützten Lernen eines rekurrenten neuronalen Netzes | |
DE112021005432T5 (de) | Verfahren und System zum Vorhersagen von Trajektorien zur Manöverplanung basierend auf einem neuronalen Netz | |
WO2012113634A1 (fr) | Procédé d'apprentissage assisté par ordinateur d'un réseau neuronal récurrent pour la modélisation d'un système dynamique | |
DE102011076969B4 (de) | Verfahren zum rechnergestützten Lernen einer Regelung und/oder Steuerung eines technischen Systems | |
EP3432093A1 (fr) | Procédé de modélisation d'un système dynamique par apprentissage assisté par ordinateur de modèles commandés par données | |
EP0956531A1 (fr) | Procede pour la transformation d'une logique floue servant a la simulation d'un processus technique en un reseau neuronal | |
EP3528063B1 (fr) | Procédé de création assistée par ordinateur d'un modèle pronostique destiné au pronostic d'une ou plusieurs grandeurs cibles | |
EP1067444B1 (fr) | Procédé et dispositif pour modeler un système technique | |
Ranta-aho et al. | Waste analysis of a project management of small environmental monitoring projects |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
17P | Request for examination filed |
Effective date: 20121005 |
|
AK | Designated contracting states |
Kind code of ref document: A1 Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
|
RAP1 | Party data changed (applicant data changed or rights of an application transferred) |
Owner name: SIEMENS AKTIENGESELLSCHAFT |
|
DAX | Request for extension of the european patent (deleted) | ||
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN |
|
18D | Application deemed to be withdrawn |
Effective date: 20130817 |