US20150370227A1 - Controlling a Target System - Google Patents

Controlling a Target System

Info

Publication number
US20150370227A1
Authority
US
United States
Prior art keywords
control policies
control
target system
weights
policies
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US14/309,641
Inventor
Hany F. Bassily
Clemens Otte
Siegmund Düll
Michael Müller
Steffen Udluft
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Siemens AG
Original Assignee
Siemens AG
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Siemens AG filed Critical Siemens AG
Priority to US14/309,641 priority Critical patent/US20150370227A1/en
Assigned to SIEMENS AKTIENGESELLSCHAFT reassignment SIEMENS AKTIENGESELLSCHAFT ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: UDLUFT, STEFFEN, DÜLL, Siegmund, OTTE, CLEMENS, Müller, Michael
Assigned to SIEMENS ENERGY, INC. reassignment SIEMENS ENERGY, INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: BASSILY, HANY F.
Assigned to SIEMENS AKTIENGESELLSCHAFT reassignment SIEMENS AKTIENGESELLSCHAFT ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: SIEMENS ENERGY, INC.
Priority to CN201580032397.5A priority patent/CN106462117B/en
Priority to KR1020177001589A priority patent/KR101963686B1/en
Priority to PCT/EP2015/060298 priority patent/WO2015193032A1/en
Priority to EP15725521.7A priority patent/EP3129839B1/en
Publication of US20150370227A1 publication Critical patent/US20150370227A1/en
Priority to US15/376,794 priority patent/US10747184B2/en
Abandoned legal-status Critical Current

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05B CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B13/00 Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion
    • G05B13/02 Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric
    • G05B13/0265 Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric the criterion being a learning criterion
    • G05B13/027 Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric the criterion being a learning criterion using neural networks only
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05B CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B2219/00 Program-control systems
    • G05B2219/20 Pc systems
    • G05B2219/23 Pc programming
    • G05B2219/23288 Adaptive states; learning transitions
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05B CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B2219/00 Program-control systems
    • G05B2219/20 Pc systems
    • G05B2219/25 Pc structure of the system
    • G05B2219/25255 Neural network
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00 Computing arrangements using knowledge-based models
    • G06N5/04 Inference or reasoning models


Abstract

For controlling a target system, such as a gas or wind turbine or another technical system, a pool of control policies is used. The pool of control policies including a plurality of control policies and weights for weighting each control policy of the plurality of control policies are received. The plurality of control policies is weighted by the weights to provide a weighted aggregated control policy. The target system is controlled using the weighted aggregated control policy, and performance data relating to a performance of the controlled target system is received. The weights are adjusted based on the received performance data to improve the performance of the controlled target system. The plurality of control policies is reweighted by the adjusted weights to adjust the weighted aggregated control policy.

Description

    BACKGROUND
  • The control of complex dynamical technical systems (e.g., gas turbines, wind turbines, or other plants) may be optimized by data driven approaches. With that, various aspects of such dynamical systems may be improved. For example, efficiency, combustion dynamics, or emissions for gas turbines may be improved. Additionally, life-time consumption, efficiency, or yaw for wind turbines may be improved.
  • Modern data driven optimization utilizes machine learning methods for improving control policies (e.g., control strategies) of dynamical systems with regard to general or specific optimization goals. Such machine learning methods may outperform conventional control strategies. For example, if the controlled system is changing, an adaptive control approach capable of learning and adjusting a control strategy according to the new situation and new properties of the dynamical system may be advantageous over conventional non-learning control strategies.
  • However, in order to optimize complex dynamical systems (e.g., gas turbines or other plants), a sufficient amount of operational data is to be collected in order to find or learn a good control strategy. Thus, in case of commissioning a new plant or upgrading or modifying the plant, it may take some time to collect sufficient operational data of the new or changed system before a good control strategy is available. Reasons for such changes may be wear, changed parts after a repair, or different environmental conditions.
  • Known methods for machine learning include reinforcement learning methods that focus on data efficient learning for a specified dynamical system. However, even when using these methods, it may take some time until a good data driven control strategy is available after a change of the dynamical system. Until then, the changed dynamical system operates outside a possibly optimized envelope. If the change rate of the dynamical system is very high, only sub-optimal results for a data driven optimization may be achieved since a sufficient amount of operational data may be never available.
  • SUMMARY AND DESCRIPTION
  • The scope of the present invention is defined solely by the appended claims and is not affected to any degree by the statements within this summary.
  • The present embodiments may obviate one or more of the drawbacks or limitations in the related art. For example, control of a target system that allows a more rapid learning of a control policy (e.g., for a changing target system) is provided.
  • Embodiments of a method, a controller, and a computer program product for controlling a target system (e.g., a gas or wind turbine or another technical system) by a processor are based on a pool of control policies. The method, controller, or computer program product (non-transitory computer readable storage medium having instructions, which when executed by a processor, perform actions) is configured to receive the pool of control policies, which includes a plurality of control policies, and to receive weights for weighting each of the plurality of control policies. The plurality of control policies is weighted by the weights to provide a weighted aggregated control policy. The target system is controlled using the weighted aggregated control policy, and performance data relating to a performance of the controlled target system are received. The weights are adjusted by the processor based on the received performance data to improve the performance of the controlled target system. The plurality of control policies is reweighted by the adjusted weights to adjust the weighted aggregated control policy.
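  • The claimed sequence of acts forms a closed loop: receive policies and weights, aggregate, control, observe performance, adjust the weights, and re-aggregate. The following Python sketch is purely illustrative; the names (control_loop, aggregate, adjust_weights, target_system.observe, target_system.step) are hypothetical placeholders for the aggregation and weight-adjustment acts described above, not an implementation prescribed by the embodiments.

```python
def control_loop(policies, weights, target_system, aggregate, adjust_weights, n_steps=100):
    """Illustrative sketch of the claimed loop (all interfaces are assumptions).

    policies       -- callables mapping performance/state data to an action proposal
    weights        -- weight vector, one entry per control policy
    target_system  -- assumed to expose observe() and step(action), both returning
                      performance data that includes state data
    aggregate      -- combines the action proposals into one aggregated control action
    adjust_weights -- learner (e.g., a neural network) updating the weights from feedback
    """
    performance_data = target_system.observe()                # initial performance/state data
    for _ in range(n_steps):
        proposals = [p(performance_data) for p in policies]   # one proposal per control policy
        action = aggregate(proposals, weights)                # weighted aggregated control action
        performance_data = target_system.step(action)         # control the target system
        weights = adjust_weights(weights, performance_data)   # adjust weights from performance
    return weights
```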
  • One or more of the present embodiments allow for an effective learning of peculiarities of the target system by adjusting the weights for the plurality of control policies. Such weights may include much fewer parameters than the pool of control policies. Thus, the adjusting of the weights may use much less computing effort and may converge much faster than a training of the whole pool of control policies. A high level of optimization may thus be reached in a shorter time. For example, a reaction time to changes of the target system may be significantly reduced. Aggregating a plurality of control policies reduces a risk of accidentally choosing a poor policy, thus increasing the robustness of the method.
  • According to an embodiment, the weights may be adjusted by training a neural network run by the processor.
  • The usage of a neural network for the adjusting of the weights allows for an efficient learning and flexible adaptation.
  • According to a further embodiment, the plurality of control policies may be calculated from different data sets of operational data of one or more source systems (e.g., by training a neural network). The different data sets may relate to different source systems, to different versions of one or more source systems, to different policy models, to source systems in different climes, or to one or more source systems under different conditions (e.g., before and after repair, maintenance, changed parts, etc.).
  • The one or more source systems may be chosen similar to the target system, so that control policies optimized for the one or more source systems are expected to perform well for the target system. Therefore, the plurality of control policies based on one or more similar source systems are a good starting point for controlling the target system. Such a learning from similar situations is often denoted as “transfer learning.” Hence, much less performance data relating to the target system are used in order to obtain a good aggregated control policy for the target system. Thus, effective aggregated control policies may be learned in a short time even for target systems with scarce data.
  • The calculation of the plurality of control policies may use a reward function relating to a performance of the source systems. That reward function may also be used for adjusting the weights.
  • The performance data may include state data relating to a current state of the target system. The plurality of control policies may be weighted and/or reweighted in dependence of the state data. This allows for a more accurate and more effective adjustment of the weights. For example, the weight of a control policy may be increased if a state is recognized where the control policy turned out to perform well, and vice versa.
  • Advantageously, the performance data may be received from the controlled target system, from a simulation model of the target system, and/or from a policy evaluation. Performance data from the controlled target system allows monitoring the actual performance of the target system and may improve the performance by learning a particular response characteristic of the target system. A simulation model of the target system also allows what-if queries for the reward function. With a policy evaluation, a Q-function may be set up, allowing an expectation value to be determined for the reward function.
  • An aggregated control action for controlling the target system may be determined according to the weighted aggregated control policy by weighted majority voting, by forming a weighted mean, and/or by forming a weighted median from action proposals according to the plurality of control policies.
  • According to one embodiment, the training of the neural network may be based on a reinforcement learning model, which allows an efficient learning of control policies for dynamical systems.
  • For example, the neural network may operate as a recurrent neural network. This allows an internal state to be maintained, enabling an efficient detection of time-dependent patterns when controlling a dynamical system. Many Partially Observable Markov Decision Processes may be handled like Markov Decision Processes by a recurrent neural network.
  • The plurality of control policies may be selected from the pool of control policies in dependence of a performance evaluation of control policies. The selected control policies may establish an ensemble of control policies. For example, only those control policies may be selected from the pool of control policies that perform well according to a predefined criterion.
  • Control policies from the pool of control policies may be included into the plurality of control policies or excluded from the plurality of control policies in dependence of the adjusted weights. This allows improvement of the selection of control policies contained in the plurality of control policies. So, for example, control policies with very small weights may be removed from the plurality of control policies in order to reduce a computational effort.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 illustrates an exemplary embodiment including a target system and a plurality of source systems together with controllers generating a pool of control policies; and
  • FIG. 2 illustrates the target system together with a controller in greater detail.
  • DETAILED DESCRIPTION
  • FIG. 1 illustrates an exemplary embodiment including a target system TS and a plurality of source systems S1, . . . , SN. The target system TS and the plurality of source systems S1, . . . , SN may be gas or wind turbines or other dynamical systems including simulation tools for simulating a dynamical system. In one embodiment, the source systems S1, . . . , SN are chosen to be similar to the target system TS.
  • The source systems S1, . . . , SN may also include the target system TS at a different time (e.g., before maintenance of the target system TS or before exchange of a system component, etc.). Vice versa, the target system TS may be one of the source systems S1, . . . , SN at a later time.
  • Each of the source systems S1, . . . , SN is controlled by a reinforcement learning controller RLC1, . . . , or RLCN, respectively. The reinforcement learning controllers RLC1, . . . , or RLCN are driven by control policies P1, . . . , or PN, respectively. The reinforcement learning controllers RLC1, . . . , RLCN may each include a recurrent neural network (not shown) for learning (e.g., optimizing the control policies P1, . . . , PN). Source system specific operational data OD1, . . . , ODN of the source systems S1, . . . , SN are collected and stored in databases DB1, . . . , DBN. The operational data OD1, . . . , ODN are processed according to the control policies P1, . . . , PN, and the control policies P1, . . . , PN are refined by reinforcement learning by the reinforcement learning controllers RLC1, . . . , RLCN. The control output of the control policies P1, . . . , PN is fed back into the respective source system S1, . . . , or SN via a control loop CL, resulting in a closed learning loop for the respective control policy P1, . . . , or PN in the respective reinforcement learning controller RLC1, . . . , or RLCN. The control policies P1, . . . , PN are fed into a reinforcement learning policy generator PGEN that generates a pool P of control policies including the control policies P1, . . . , PN.
  • The target system TS is controlled by a reinforcement learning controller RLC including a recurrent neural network RNN and an aggregated control policy ACP. The reinforcement learning controller RLC receives the control policies P1, . . . , PN from the reinforcement learning policy generator PGEN and generates the aggregated control policy ACP from the control policies P1, . . . , PN.
  • The reinforcement learning controller RLC receives performance data PD relating to a current performance of the target system TS (e.g., a current power output, a current efficiency, etc.) from the target system TS. The performance data PD includes state data SD relating to a current state of the target system TS (e.g., temperature, rotation speed, etc.). The performance data PD is input to the recurrent neural network RNN for training of the recurrent neural network RNN and input to the aggregated control policy ACP for generating an aggregated control action for controlling the target system TS via a control loop CL. This results in a closed learning loop for the reinforcement learning controller RLC.
  • The usage of pre-trained control policies P1, . . . , PN from several similar source systems S1, . . . , SN gives a good starting point for a neural model run by the reinforcement learning controller RLC. With that, the amount of data and/or time required for learning an efficient control policy for the target system TS may be reduced considerably.
  • FIG. 2 illustrates one embodiment of the target system TS together with the reinforcement learning controller RLC in greater detail. The reinforcement learning controller RLC includes a processor PROC and, as already mentioned above, the recurrent neural network RNN and the aggregated control policy ACP. The recurrent neural network RNN implements a reinforcement learning model.
  • The performance data PD(SD) including the state data SD stemming from the target system TS is input to the recurrent neural network RNN and to the aggregated control policy ACP. The control policies P1, . . . , PN are input to the reinforcement learning controller RLC. The control policies P1, . . . , PN may include the whole pool P or a selection of control policies from the pool P.
  • The recurrent neural network RNN is adapted to train a weighting policy WP including weights W1, . . . , WN for weighting each of the control policies P1, . . . , PN. The weights W1, . . . , WN are initialized by initial weights IW1, . . . , IWN received by the reinforcement learning controller RLC (e.g., from the reinforcement learning policy generator PGEN or from a different source).
  • The aggregated control policy ACP relies on an aggregation function AF receiving the weights W1, . . . , WN from the recurrent neural network RNN and on the control policies P1, . . . , PN. Each of the control policies P1, . . . , PN or a pre-selected part of the control policies P1, . . . , PN receives the performance data PD(SD) with the state data SD and calculates from the performance data PD(SD) and the state data SD a specific action proposal AP1, . . . , or APN, respectively. The action proposals AP1, . . . , APN are input to the aggregation function AF, which weights each of the action proposals AP1, . . . , APN with a respective weight W1, . . . , or WN to generate an aggregated control action AGGA. The action proposals AP1, . . . , APN may be weighted (e.g., by majority voting, by forming a weighted mean, and/or by forming a weighted median from the control policies P1, . . . , PN). The target system TS is controlled by the aggregated control action AGGA.
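  • A minimal sketch of such an aggregation function is given below, assuming scalar action proposals; it shows a weighted mean and a weighted median for continuous proposals and a weighted majority vote for discrete ones. The function names are illustrative and not taken from the embodiment.

```python
import numpy as np

def aggregate_weighted_mean(proposals, weights):
    """Weighted mean of continuous action proposals AP1, ..., APN."""
    return float(np.average(np.asarray(proposals, dtype=float), weights=weights))

def aggregate_weighted_median(proposals, weights):
    """Weighted median of continuous scalar action proposals."""
    order = np.argsort(proposals)
    a = np.asarray(proposals, dtype=float)[order]
    w = np.asarray(weights, dtype=float)[order]
    cdf = np.cumsum(w) / np.sum(w)
    return float(a[np.searchsorted(cdf, 0.5)])

def aggregate_weighted_vote(proposals, weights):
    """Weighted majority vote over discrete action proposals."""
    scores = {}
    for action, weight in zip(proposals, weights):
        scores[action] = scores.get(action, 0.0) + weight
    return max(scores, key=scores.get)
```

  • Which variant is appropriate depends on whether the action proposals are discrete or continuous, as discussed further below.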
  • The performance data PD(SD) resulting from the control of the target system TS by the aggregated control action AGGA are fed back to the aggregated control policy ACP and to the recurrent neural network RNN. From the fed back performance data PD(SD), new specific action proposals AP1, . . . , APN are calculated by the control policies P1, . . . , PN. The recurrent neural network RNN uses a reward function (not shown) relating to a desired performance of the target system TS for adjusting the weights W1, . . . , WN in dependence of the performance data PD(SD) fed back from the target system TS. The weights W1, . . . , WN are adjusted by reinforcement learning with an optimization goal directed to an improvement of the desired performance. With the adjusted weights W1, . . . , WN, an update UPD of the aggregation function AF is made. The updated aggregation function AF weights the new action proposals AP1, . . . , APN (e.g., reweights the control policies P1, . . . , PN) by the adjusted weights W1, . . . , WN in order to generate a new aggregated control action AGGA for controlling the target system TS. The above acts implement a closed learning loop leading to a considerable improvement of the performance of the target system TS.
  • A more detailed description of the embodiment is given below.
  • Each control policy P1, . . . , PN is initially calculated by the reinforcement learning controllers RLC1, . . . , RLCN based on a set of operational data OD1, . . . , or ODN, respectively. The set of operational data for a specific control policy may be specified in multiple ways. Examples for such specific sets of operational data may be operational data of a single system (e.g., a single plant, operational data of multiple plants of a certain version, operational data of plants before and/or after a repair, or operational data of plants in a certain clime, in a certain operational condition, and/or in a certain environmental condition). Different control policies from P1, . . . , PN may refer to different policy models trained on a same set of operational data.
  • When applying any such control policy that is specific to a certain source system to the target system, the target system may not perform optimally, since none of the data sets was representative of the target system. Therefore, a number of control policies may be selected from the pool P to form an ensemble of control policies P1, . . . , PN. Each control policy P1, . . . , PN provides a separate action proposal AP1, . . . , or APN, from the performance data PD(SD). The action proposals AP1, . . . , APN are aggregated to calculate the aggregated control action AGGA of the aggregated control policy ACP. In case of discrete action proposals AP1, . . . , APN, the aggregation may be performed using majority voting. If the action proposals AP1, . . . , APN are continuous, a mean or median value of the action proposals AP1, . . . , APN may be used for the aggregation.
  • The reweighting of the control policies P1, . . . , PN by the adjusted weights W1, . . . , WN allows for a rapid adjustment of the aggregated control policy ACP, for example, if the target system TS changes. The reweighting depends on the recent performance data PD(SD) generated while interacting with the target system TS. Since the weighting policy WP has fewer free parameters (e.g., the weights W1, . . . , WN) than a control policy usually has, less data is needed to adjust to a new situation or to a modified system. The weights W1, . . . , WN may be adjusted using the current performance data PD(SD) of the target system and/or using a model of the target system (e.g., implemented by an additional recurrent neural network) and/or using a policy evaluation.
  • According to a simple implementation, each control policy P1, . . . , PN may be globally weighted (e.g., over a complete state space of the target system TS). A weight of zero may indicate that a particular control policy is not part of the ensemble of policies.
  • Additionally or alternatively, the weighting by the aggregation function AF may depend on the system state (e.g., on the state data SD of the target system TS). This may be used to favor good control policies with high weights within one region of the state space of the target system TS. Within other regions of the state space, these control policies may not be used at all.
  • Pi, i=1, . . . , N may denote a control policy from the set of stored control policies P1, . . . , PN, and s may be a vector denoting a current state of the target system TS. A weight function f(Pi,s) may assign a weight Wi (of the set W1, . . . , WN) to the respective control policy Pi dependent on the current state denoted by s (e.g., Wi=f(Pi,s)). A possible approach may be to calculate the weights Wi based on distances (e.g., according to a pre-defined metric of the state space) between the current state s and states stored together with Pi in a training set including states where Pi performed well. Uncertainty estimates (e.g., provided by a probabilistic policy) may also be included in the weight calculation.
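  • As one concrete possibility for such a weight function f(Pi,s), the sketch below scores a control policy by the distance between the current state and the stored states in which that policy performed well, using a Euclidean metric and a Gaussian kernel; both choices are assumptions made for illustration only.

```python
import numpy as np

def state_dependent_weight(current_state, good_states, length_scale=1.0):
    """Weight W_i = f(P_i, s) for one control policy P_i.

    good_states  -- states (one per row) in which P_i performed well (its training set)
    length_scale -- assumed kernel width of the Gaussian similarity measure
    The weight grows as the current state s approaches states where P_i was successful.
    """
    s = np.asarray(current_state, dtype=float)
    states = np.atleast_2d(np.asarray(good_states, dtype=float))
    nearest = np.min(np.linalg.norm(states - s, axis=1))          # pre-defined metric: Euclidean
    return float(np.exp(-nearest**2 / (2.0 * length_scale**2)))   # Gaussian kernel of the distance

def state_dependent_weights(current_state, good_states_per_policy, length_scale=1.0):
    """Compute and normalize W_1, ..., W_N for all policies in the ensemble."""
    w = np.array([state_dependent_weight(current_state, gs, length_scale)
                  for gs in good_states_per_policy])
    total = w.sum()
    return w / total if total > 0 else np.full(len(w), 1.0 / len(w))
```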
  • In one embodiment, the global and/or state dependent weighting is optimized using reinforcement learning. The action space of such a reinforcement learning problem is the space of the weights W1, . . . , WN, while the state space is given by the state space of the target system TS. For a pool of, for example, ten control policies, the action space is only ten dimensional and, therefore, allows a rapid optimization with comparatively little input data and little computational effort. Meta actions may be used to reduce the dimensionality of the action space even further. Delayed effects are mitigated by using the reinforcement learning approach.
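  • The embodiment does not prescribe a particular reinforcement learning algorithm for optimizing the weighting. As one illustrative stand-in, the sketch below adjusts a global weight vector with a simple policy-gradient (REINFORCE-style) update over a Gaussian distribution on the weight space, using the measured reward as the learning signal; the function evaluate_reward and all parameters are hypothetical.

```python
import numpy as np

def optimize_weights(evaluate_reward, n_policies=10, iterations=200,
                     sigma=0.05, learning_rate=0.1, seed=0):
    """REINFORCE-style search over the N-dimensional weight space.

    evaluate_reward -- assumed callable returning the measured reward of the
                       target system when controlled with the given weight vector
    """
    rng = np.random.default_rng(seed)
    mean = np.full(n_policies, 1.0 / n_policies)        # start from uniform weights
    baseline = 0.0                                      # running baseline reduces variance
    for _ in range(iterations):
        noise = rng.normal(0.0, sigma, size=n_policies)
        weights = np.clip(mean + noise, 1e-6, None)     # sampled action: a weight vector
        reward = evaluate_reward(weights / weights.sum())
        baseline = 0.9 * baseline + 0.1 * reward
        # Gaussian policy gradient: d log N(w; mean, sigma^2) / d mean = noise / sigma^2
        mean += learning_rate * (reward - baseline) * noise / sigma**2
        mean = np.clip(mean, 1e-6, None)
    return mean / mean.sum()
```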
  • The adjustment of the weights W1, . . . , WN may be carried out by applying a measured performance of the ensemble of control policies P1, . . . , PN to a reward function. The reward function may be chosen according to the goal of maximizing efficiency, maximizing output, minimizing emissions, and/or minimizing wear of the target system TS. For example, a reward function used to train the control policies P1, . . . , PN may be used for training and/or initializing the weighting policy WP.
  • With the trained weights W1, . . . , WN, the aggregated control action AGGA may be computed according to AGGA=AF(s,AP1, . . . , APN, W1, . . . , WN), with APi=Pi(s), i=1, . . . , N.
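  • A short, self-contained numerical example of this formula is given below; the two toy policies, the state vector, and the weighted-mean aggregation are invented solely to show how AGGA is evaluated.

```python
import numpy as np

# Two toy control policies mapping a state vector s to a scalar action proposal.
policies = [lambda s: 0.5 * s[0],            # P1
            lambda s: s[0] - 0.2 * s[1]]     # P2
trained_weights = np.array([0.7, 0.3])       # W1, W2 after adjustment

def AF(state, action_proposals, weights):
    """Aggregation function: here a weighted mean of AP1, ..., APN."""
    return float(np.average(action_proposals, weights=weights))

s = np.array([1.0, 2.0])                     # current state of the target system
proposals = [P(s) for P in policies]         # AP_i = P_i(s)  ->  [0.5, 0.6]
AGGA = AF(s, proposals, trained_weights)     # 0.7*0.5 + 0.3*0.6 = 0.53
print(AGGA)
```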
  • The elements and features recited in the appended claims may be combined in different ways to produce new claims that likewise fall within the scope of the present invention. Thus, whereas the dependent claims appended below depend from only a single independent or dependent claim, it is to be understood that these dependent claims can, alternatively, be made to depend in the alternative from any preceding or following claim, whether independent or dependent, and that such new combinations are to be understood as forming a part of the present specification.
  • While the present invention has been described above by reference to various embodiments, it should be understood that many changes and modifications can be made to the described embodiments. It is therefore intended that the foregoing description be regarded as illustrative rather than limiting, and that it be understood that all equivalents and/or combinations of embodiments are intended to be included in this description.

Claims (20)

1. A method for controlling a target system by a processor based on a pool of control policies, the method comprising:
receiving the pool of control policies, the pool of control policies comprising a plurality of control policies;
receiving weights for weighting each control policy of the plurality of control policies;
weighting the plurality of control policies by the weights to provide a weighted aggregated control policy;
controlling the target system using the weighted aggregated control policy;
receiving performance data relating to a performance of the controlled target system;
adjusting the weights by the processor based on the received performance data to improve the performance of the controlled target system; and
reweighting the plurality of control policies by the adjusted weights to adjust the weighted aggregated control policy.
2. The method of claim 1, wherein adjusting the weights comprises training a neural network run by the processor.
3. The method of claim 2, further comprising:
receiving operational data of at least one source system; and
calculating the plurality of control policies from different data sets of the operational data.
4. The method of claim 3, wherein calculating the plurality of control policies comprises training the neural network or a further neural network.
5. The method of claim 3, wherein calculating the plurality of control policies comprises using a reward function relating to a performance of the at least one source system, and
wherein adjusting the weights comprises using the reward function for the adjusting of the weights.
6. The method of claim 1, wherein the performance data comprises state data relating to a current state of the target system, and
wherein the weighting of the plurality of control policies, the reweighting of the plurality of control policies, or the weighting of the plurality of control policies and the reweighting of the plurality of control policies depends on the state data.
7. The method as claimed in claim 1, wherein receiving the performance data comprises receiving the performance data from the controlled target system, from a simulation model of the target system, from a policy evaluation, or from any combination thereof.
8. The method of claim 1, wherein controlling the target system comprises determining an aggregated control action according to the weighted aggregated control policy by weighted majority voting, by forming a weighted mean, by forming a weighted median from action proposals according to the plurality of control policies, or by any combination thereof.
9. The method of claim 2, wherein the training of the neural network is based on a reinforcement learning model.
10. The method of claim 2, wherein the neural network operates as a recurrent neural network.
11. The method of claim 1, wherein the plurality of control policies is selected from the pool of control policies in dependence of a performance evaluation of control policies.
12. The method of claim 1, wherein control policies from the pool of control policies are included into or excluded from the plurality of control policies in dependence of the adjusted weights.
13. The method of claim 1, wherein the controlling, the receiving of the performance data, the adjusting, and the reweighting are run in a closed learning loop with the target system.
14. A controller for controlling a target system based on a pool of control policies, the controller being configured to:
receive the pool of control policies, the pool of control policies comprising a plurality of control policies;
receive weights for weighting each control policy of the plurality of control policies;
weight the plurality of control policies by the weights to provide a weighted aggregated control policy;
control the target system using the weighted aggregated control policy;
receive performance data relating to a performance of the controlled target system;
adjust the weights by the processor based on the received performance data to improve the performance of the controlled target system; and
reweight the plurality of control policies by the adjusted weights to adjust the weighted aggregated control policy.
15. In a non-transitory computer-readable storage medium that stores instructions executable by one or more processors to control a target system based on a pool of control policies, the instructions comprising:
receiving the pool of control policies, the pool of control policies comprising a plurality of control policies;
receiving weights for weighting each control policy of the plurality of control policies;
weighting the plurality of control policies by the weights to provide a weighted aggregated control policy;
controlling the target system using the weighted aggregated control policy;
receiving performance data relating to a performance of the controlled target system;
adjusting the weights by the processor based on the received performance data to improve the performance of the controlled target system; and
reweighting the plurality of control policies by the adjusted weights to adjust the weighted aggregated control policy.
16. The non-transitory computer-readable storage medium of claim 15, wherein adjusting the weights comprises training a neural network run by the processor.
17. The non-transitory computer-readable storage medium of claim 16, wherein the instructions further comprise:
receiving operational data of at least one source system; and
calculating the plurality of control policies from different data sets of the operational data.
18. The non-transitory computer-readable storage medium of claim 17, wherein calculating the plurality of control policies comprises training the neural network or a further neural network.
19. The non-transitory computer-readable storage medium of claim 17, wherein calculating the plurality of control policies comprises using a reward function relating to a performance of the at least one source system, and
wherein adjusting the weights comprises using the reward function for the adjusting of the weights.
20. The non-transitory computer-readable storage medium of claim 15, wherein the performance data comprises state data relating to a current state of the target system, and
wherein the weighting of the plurality of control policies, the reweighting of the plurality of control policies, or the weighting of the plurality of control policies and the reweighting of the plurality of control policies depends on the state data.
US14/309,641 2014-06-19 2014-06-19 Controlling a Target System Abandoned US20150370227A1 (en)

Priority Applications (6)

Application Number Priority Date Filing Date Title
US14/309,641 US20150370227A1 (en) 2014-06-19 2014-06-19 Controlling a Target System
CN201580032397.5A CN106462117B (en) 2014-06-19 2015-05-11 Control target system
KR1020177001589A KR101963686B1 (en) 2014-06-19 2015-05-11 Controlling a target system
PCT/EP2015/060298 WO2015193032A1 (en) 2014-06-19 2015-05-11 Controlling a target system
EP15725521.7A EP3129839B1 (en) 2014-06-19 2015-05-11 Controlling a target system
US15/376,794 US10747184B2 (en) 2014-06-19 2016-12-13 Controlling a target system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US14/309,641 US20150370227A1 (en) 2014-06-19 2014-06-19 Controlling a Target System

Related Child Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2015/060298 Continuation WO2015193032A1 (en) 2014-06-19 2015-05-11 Controlling a target system

Publications (1)

Publication Number Publication Date
US20150370227A1 true US20150370227A1 (en) 2015-12-24

Family

ID=53274489

Family Applications (2)

Application Number Title Priority Date Filing Date
US14/309,641 Abandoned US20150370227A1 (en) 2014-06-19 2014-06-19 Controlling a Target System
US15/376,794 Active 2036-11-10 US10747184B2 (en) 2014-06-19 2016-12-13 Controlling a target system

Family Applications After (1)

Application Number Title Priority Date Filing Date
US15/376,794 Active 2036-11-10 US10747184B2 (en) 2014-06-19 2016-12-13 Controlling a target system

Country Status (5)

Country Link
US (2) US20150370227A1 (en)
EP (1) EP3129839B1 (en)
KR (1) KR101963686B1 (en)
CN (1) CN106462117B (en)
WO (1) WO2015193032A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019058508A1 (en) * 2017-09-22 2019-03-28 Nec Corporation Ensemble control system, ensemble control method, and ensemble control program
JP2019067238A (en) * 2017-10-03 2019-04-25 エヌ・ティ・ティ・コミュニケーションズ株式会社 Control device, control method and control program
WO2020174262A1 (en) * 2019-02-27 2020-09-03 Telefonaktiebolaget Lm Ericsson (Publ) Transfer learning for radio resource management

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6453919B2 (en) * 2017-01-26 2019-01-16 ファナック株式会社 Behavior information learning device, behavior information optimization system, and behavior information learning program
CN109308246A (en) * 2017-07-27 2019-02-05 阿里巴巴集团控股有限公司 Optimization method, device and the equipment of system parameter, readable medium
CN109388547A (en) * 2018-09-06 2019-02-26 福州瑞芯微电子股份有限公司 A kind of method and a kind of storage equipment optimizing terminal capabilities
EP3715608B1 (en) * 2019-03-27 2023-07-12 Siemens Aktiengesellschaft Machine control based on automated learning of subordinate control skills
EP3792483A1 (en) * 2019-09-16 2021-03-17 Siemens Gamesa Renewable Energy A/S Wind turbine control based on reinforcement learning

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5734593A (en) * 1996-04-24 1998-03-31 Bei Sensors & Systems Company, Inc. Fuzzy logic controlled cryogenic cooler
US6574754B1 (en) * 2000-02-14 2003-06-03 International Business Machines Corporation Self-monitoring storage device using neural networks
US6577908B1 (en) 2000-06-20 2003-06-10 Fisher Rosemount Systems, Inc Adaptive feedback/feedforward PID controller
JP4160399B2 (en) 2001-03-01 2008-10-01 フィッシャー−ローズマウント システムズ, インコーポレイテッド Creating and displaying indicators in the process plant
JPWO2004068399A1 (en) 2003-01-31 2006-05-25 松下電器産業株式会社 Predictive action determination device and action determination method
US7184847B2 (en) * 2004-12-17 2007-02-27 Texaco Inc. Method and system for controlling a process in a plant
CN100530003C (en) * 2007-10-19 2009-08-19 西安交通大学 Heat-engine plant steel ball coal-grinding coal-grinding machine powder-making system automatic control method based on data digging
CN103034122A (en) * 2012-11-28 2013-04-10 上海交通大学 Multi-model self-adaptive controller and control method based on time series
CN103019097B (en) * 2012-11-29 2015-03-25 北京和隆优化科技股份有限公司 Optimal control system for steel rolling heating furnace
US20150301510A1 (en) * 2014-04-22 2015-10-22 Siegmund Düll Controlling a Target System

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Galtier, Ideomotor feedback control in a recurrent neural network, Biological Cybernetics 109(3), 2014, pp. 1-17 *
Nakamura, et al., Natural Policy Gradient Reinforcement Learning for a CPG Control of a Biped Robot, Parallel Problem Solving from Nature - PPSN VIII, Volume 3242, Lecture Notes in Computer Science, pp. 972-981 *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019058508A1 (en) * 2017-09-22 2019-03-28 Nec Corporation Ensemble control system, ensemble control method, and ensemble control program
JP2020529664A (en) * 2017-09-22 2020-10-08 日本電気株式会社 Combination control system, combination control method, and combination control program
JP7060080B2 (en) 2017-09-22 2022-04-26 日本電気株式会社 Combination control system, combination control method, and combination control program
JP2019067238A (en) * 2017-10-03 2019-04-25 エヌ・ティ・ティ・コミュニケーションズ株式会社 Control device, control method and control program
WO2020174262A1 (en) * 2019-02-27 2020-09-03 Telefonaktiebolaget Lm Ericsson (Publ) Transfer learning for radio resource management
US11658880B2 (en) 2019-02-27 2023-05-23 Telefonaktiebolaget Lm Ericsson (Publ) Transfer learning for radio resource management

Also Published As

Publication number Publication date
CN106462117B (en) 2019-12-10
EP3129839B1 (en) 2019-06-26
WO2015193032A1 (en) 2015-12-23
KR101963686B1 (en) 2019-03-29
CN106462117A (en) 2017-02-22
KR20170023098A (en) 2017-03-02
US10747184B2 (en) 2020-08-18
US20170090429A1 (en) 2017-03-30
EP3129839A1 (en) 2017-02-15


Legal Events

Date Code Title Description
AS Assignment

Owner name: SIEMENS AKTIENGESELLSCHAFT, GERMANY

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:DUELL, SIEGMUND;MUELLER, MICHAEL;OTTE, CLEMENS;AND OTHERS;SIGNING DATES FROM 20140713 TO 20140716;REEL/FRAME:034598/0574

Owner name: SIEMENS ENERGY, INC., FLORIDA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:BASSILY, HANY F.;REEL/FRAME:034598/0592

Effective date: 20140918

Owner name: SIEMENS AKTIENGESELLSCHAFT, GERMANY

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:SIEMENS ENERGY, INC.;REEL/FRAME:034598/0599

Effective date: 20141023

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION