US20200327435A1 - Systems and methods for sequential power system model parameter estimation

Systems and methods for sequential power system model parameter estimation

Info

Publication number
US20200327435A1
US20200327435A1
Authority
US
United States
Prior art keywords
parameters
model
events
event
processor
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US16/572,111
Inventor
Honggang Wang
Anup MENON
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
General Electric Co
Original Assignee
General Electric Co
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by General Electric Co filed Critical General Electric Co
Priority to US16/572,111 priority Critical patent/US20200327435A1/en
Assigned to GENERAL ELECTRIC COMPANY reassignment GENERAL ELECTRIC COMPANY ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: MENON, ANUP, WANG, HONGGANG
Assigned to UNITED STATES DEPARTMENT OF ENERGY reassignment UNITED STATES DEPARTMENT OF ENERGY CONFIRMATORY LICENSE (SEE DOCUMENT FOR DETAILS). Assignors: GENERAL ELECTRIC GLOBAL RESEARCH
Publication of US20200327435A1 publication Critical patent/US20200327435A1/en

Classifications

    • G06F 30/18 - Network design, e.g. design based on topological or interconnect aspects of utility systems, piping, heating ventilation air conditioning [HVAC] or cabling
    • G06F 30/20 - Design optimisation, verification or simulation
    • G06F 7/58 - Random or pseudo-random number generators
    • G06F 17/14 - Fourier, Walsh or analogous domain transformations, e.g. Laplace, Hilbert, Karhunen-Loeve, transforms
    • G06F 2111/02 - CAD in a network environment, e.g. collaborative CAD or distributed simulation
    • G06F 2111/10 - Numerical modelling
    • G06F 2113/04 - Power grid distribution networks
    • G06F 2113/06 - Wind turbines or wind farms
    • G06F 2119/06 - Power analysis or power optimisation
    • G06N 3/02 - Neural networks; G06N 3/08 - Learning methods
    • G06N 5/01 - Dynamic search techniques; Heuristics; Dynamic trees; Branch-and-bound
    • G06N 7/01 - Probabilistic graphical models, e.g. probabilistic networks
    • G06N 7/005
    • G06F 17/5009
    • G06Q 50/06 - Electricity, gas or water supply
    • H02J 3/003 - Load forecast, e.g. methods or systems for forecasting future load demand
    • H02J 3/008 - Circuit arrangements for ac mains or ac distribution networks involving trading of energy or energy transmission rights
    • H02J 2203/20 - Simulating, e.g. planning, reliability check, modelling or computer assisted design [CAD]
    • Y02E 60/00 - Enabling technologies; Technologies with a potential or indirect contribution to GHG emissions mitigation
    • Y04S 40/20 - Information technology specific aspects, e.g. CAD, simulation, modelling, system security

Definitions

  • the field of the invention relates generally to sequential power system model parameter estimation, and more particularly, to a system for modeling sequential power systems based on multiple events.
  • Some of the methods of performing calibration on the model include performing staged tests and direct measurement of disturbances.
  • In a staged test, a generator is first taken offline from normal operation. While the generator is offline, testing equipment is connected to the generator and its controllers to perform a series of predesigned tests to derive the desired model parameters. This method may cost $15,000-$35,000 per generator per test in the United States, including both the cost of performing the test and the cost of taking the generator offline.
  • Phasor Measurement Units (PMUs) and Digital Fault Recorders (DFRs) have seen dramatically increasing installation in recent years, which allows for non-invasive model validation using sub-second-resolution dynamic data. The varying types of disturbances across locations in the power system, along with the large installed base of PMUs, make it possible to validate the dynamic models of the generators frequently at different operating conditions.
  • a system for sequential power system model calibration includes a computing device including at least one processor in communication with at least one memory device.
  • the at least one processor is programmed to store a model of a device.
  • the model includes a plurality of parameters.
  • the at least one processor is also programmed to receive a plurality of events associated with the device.
  • the at least one processor is further programmed to filter the plurality of events to generate a plurality of unique events.
  • the at least one processor is programmed to sequentially analyze the plurality of unique events to determine a set of calibrated parameters for the model.
  • the at least one processor is programmed to update the model to include the set of calibrated parameters.
  • a computer-implemented method for sequential power system model calibration is provided.
  • the method is implemented by a computing device including at least one processor in communication with at least one memory device.
  • the method includes storing a model of a device.
  • the model includes a plurality of parameters.
  • the method also includes receiving a plurality of events associated with the device.
  • the method further includes filtering the plurality of events to generate a plurality of unique events.
  • the method includes sequentially analyzing the plurality of unique events to determine a set of calibrated parameters for the model.
  • the method includes updating the model to include the set of calibrated parameters.
  • A non-transitory computer-readable storage medium having computer-executable instructions embodied thereon is also provided. When executed by a computing device having at least one processor coupled to at least one memory device, the computer-executable instructions cause the processor to store a model of a device. The model includes a plurality of parameters. The computer-executable instructions also cause the processor to receive a plurality of events associated with the device. The computer-executable instructions further cause the processor to filter the plurality of events to generate a plurality of unique events. In addition, the computer-executable instructions cause the processor to sequentially analyze the plurality of unique events to determine a set of calibrated parameters for the model. Moreover, the computer-executable instructions cause the processor to update the model to include the set of calibrated parameters.
  • FIG. 1 illustrates a block diagram of a power distribution grid.
  • FIG. 2 illustrates a high-level block diagram of a system for performing sequential calibration in accordance with some embodiments.
  • FIG. 3 illustrates a block diagram of an exemplary system architecture for sequential calibration, in accordance with one embodiment of the disclosure.
  • FIGS. 4A and 4B illustrate examples of screening sequential events using fingerprinting.
  • FIG. 4C illustrates a graph of null spaces from multiple events.
  • FIG. 5 illustrates a process for power system model parameter conditioning in accordance with some embodiments.
  • FIG. 6 is a diagram illustrating a model calibration algorithm in accordance with some embodiments.
  • FIG. 7 is a table illustrating a comparison of events calibrated for one event and all other events.
  • FIG. 8 is a table illustrating the use of the sequential estimation approach for a gas plant.
  • FIG. 9 is a table illustrating the use of the sequential estimation approach for a hydro plant.
  • FIGS. 10A and 10B illustrate a process for identifying and estimating parameters in accordance with at least one embodiment.
  • FIG. 11 is a diagram illustrating candidate parameter estimation algorithms in accordance with some embodiments.
  • FIG. 12 illustrates a two-stage approach of the process for model calibration.
  • FIG. 13 is a diagram illustrating an exemplary apparatus or platform according to some embodiments.
  • FIG. 14 is a diagram illustrating a method of performing model calibration using multiple disturbance events in accordance with at least one embodiment.
  • FIG. 15 illustrates a process for sequential calibration using the system architecture shown in FIG. 3 .
  • FIG. 16 is a data flow diagram illustrating the system architecture shown in FIG. 3 executing the sequential calibration process shown in FIG. 15 .
  • FIG. 17 is a data flow diagram illustrating the system architecture shown in FIG. 3 executing a parameter selection process in accordance with at least one embodiment.
  • A traditional simulation engine relies on differential algebraic equations (DAEs) to perform simulations.
  • The simulation engine may include dozens or hundreds of such equations for a single component on the power grid.
  • Because the simulation engine has a non-linear response, it is not easy to automatically extract the analytical gradient information that is needed for optimization.
  • One simulation is the equivalent of a Jacobian matrix calculation, which can include 200 iterations or more. Each iteration can take a minute or more, meaning that for one simulation the simulation engine can require at least 200 minutes.
  • a dynamic simulation engine is used to facilitate both identifiability of parameters (in total) and determination of parameters for calibration.
  • Given field data with time-stamped voltage (V) and frequency (f), the simulation engine will provide the simulated active power (P̂) and reactive power (Q̂) with the same timestamp.
  • Parameter identification involves multiple calls of simulation engines with parameter perturbation to determine the best choice of a subset of the parameters for tuning (calibration).
  • Calibration involves multiple calls of the simulation engine to search for the best value for the given subset of parameters determined in the identifiability step.
  • the example embodiments provide a predictive model which can be used to replace the dynamic simulation engine when performing the parameter identification and the parameter calibration. This is described in U.S. patent application Ser. No. 15/794,769, filed 26 Oct. 2017, the contents of which are incorporated in their entirety.
  • the model can be trained based on historical behavior of a dynamic simulation engine thereby learning patterns between inputs and outputs of the dynamic simulation engine.
  • the model can emulate the functionality performed by the dynamic simulation engine without having to perform numerous rounds of simulation. Instead, the model can predict (e.g., via a neural network, or the like) a subset of parameters for model calibration and also predict/estimate optimal parameter values for the subset of parameters in association with a power system model that is being calibrated.
  • the model may be used to capture both input-output function and first derivative of a dynamic simulation engine used for model calibration.
  • the model may be updated based on its confidence level and prediction deviation against the original simulation engine.
  • the model may be a surrogate for a dynamic simulation engine and may be used to perform model calibration without using DAE equations.
  • The system described herein may be a model parameter tuning engine, which is configured to receive the power system data and a model calibration command, and to search for the optimal model parameters using the surrogate model until the closeness between the simulated response and the real response from the power system data meets a predefined threshold.
  • the model operates on disturbance event data that includes one or more of device terminal real power, reactive power, voltage magnitude, and phase angle data.
  • the model calibration may be triggered by user or by automatic model validation step.
  • the model may be trained offline when there is no grid event calibration task.
  • the model may represent a set of different models used for different kinds of events.
  • the model's input may include at least one of voltage, frequency and other model tunable parameters.
  • the model may be a neural network model, fuzzy logic, a polynomial function, and the like.
  • Other model tunable parameters may include a parameter affecting dynamic behavior of machine, exciter, stabilizer and governor.
  • the surrogate model's output may include active power, reactive power or both.
  • The optimizer may be a gradient-based method, including Newton-like methods.
  • The optimizer may be a gradient-free method, including pattern search, genetic algorithms, simulated annealing, particle swarm optimization, differential evolution, and the like.
  • the model utilizes multiple disturbance events to validate and calibrate power system models for compliance with NERC mandated grid reliability requirements.
  • The sequential model calibration system described herein comprises three steps.
  • The first step is sequential event selection. This uses a similarity-based screening approach, where the event's dynamic features are coded as a bit-string. The system considers not only active power (P) and reactive power (Q) but also voltage (U) and frequency (F). In some embodiments, the Tanimoto coefficient is used as the similarity metric.
  • the second step is Sequential parameter identifiability, which includes selecting the most sensitive parameter subset based on increasing events.
  • the third step includes Bayesian optimization. This includes determining the new parameter values by considering the deviation from previous parameter estimates. The weight for the penalty is derived from a Bayesian argument.
  • FIG. 1 illustrates a power distribution grid 100 .
  • the grid 100 includes a number of components, such as power generators 110 .
  • In some cases, planning studies conducted using dynamic models predict stable grid 100 operation, but the actual grid 100 may become unstable in a few minutes with severe swings (resulting in a massive blackout).
  • The North American Electric Reliability Corporation (“NERC”) requires generators 110 above 10 MVA to be tested every five years to check the accuracy of dynamic models and requires that the power plant dynamic models be updated as necessary.
  • the systems described herein consider not only active power (P) and reactive power (Q) but also voltage (U) and frequency (F).
  • In a staged test, a generator 110 is first taken offline from normal operation. While the generator 110 is offline, testing equipment is connected to the generator 110 and its controllers to perform a series of pre-designed tests to derive the desired model parameters.
  • PMUs 120 and Digital Fault Recorders (“DFRs”) 130 have seen dramatically increasing installation in recent years, which may allow for non-invasive model validation using sub-second-resolution dynamic data. Varying types of disturbances across locations in the grid 100, along with the large installed base of PMUs 120, may, according to some embodiments, make it possible to validate the dynamic models of the generators 110 frequently at different operating conditions.
  • Model calibration is a process that seeks multiple (dozens or hundreds of) model parameters and can suffer from local minima and multiple solutions.
  • Accordingly, an algorithm is needed to enhance the quality of a solution within a reasonable amount of time and computational burden.
  • Online performance monitoring of power plants using synchrophasor data or other high-resolution disturbance monitoring data acts as a recurring test to ensure that the modeled response to system events matches the actual response of the power plant or generating unit. From the Generator Owner's (GO's) perspective, online verification using high-resolution measurement data can provide evidence of compliance by demonstrating the validity of the model through online measurement. It is therefore a cost-effective approach for GOs, as they may not have to take the unit offline to test model parameters.
  • Online performance monitoring requires that disturbance monitoring equipment such as a PMU be located at the terminals of an individual generator or Point of Interconnection (POI) of a power plant.
  • The disturbance recorded by a PMU normally consists of four variables: voltage, frequency, active power, and reactive power.
  • Play-in (or playback) simulation has been developed and is now available in all major grid simulators.
  • The simulated output, including active power and reactive power, is generated and can then be compared with the measured active power and reactive power.
  • FIG. 2 is a high-level block diagram of a system 200 in accordance with some embodiments.
  • the system 200 includes one or more measurement units 210 (e.g., PMUs, DFRs, or other devices to measure frequency, voltage, current, or power phasors) that store information into a measurement data store 220 .
  • PMU might refer to, for example, a device used to estimate the magnitude and phase angle of an electrical phasor quantity like voltage or current in an electricity grid using a common time source for synchronization.
  • DFR might refer to, for example, an Intelligent Electronic Device (“IED”) that can be installed in a remote location, and acts as a termination point for field contacts.
  • the measurement data might be associated with disturbance event data and/or data from deliberately performed unit tests.
  • a model parameter tuning engine 250 may access this data and use it to tune parameters for a dynamic system model 260 .
  • the process might be performed automatically or be initiated via a calibration command from a remote operator interface device 290 .
  • the term “automatically” may refer to, for example, actions that can be performed with little or no human intervention.
  • power systems may be designed and operated using mathematical models (power system models) that characterize the expected behavior of power plants, grid elements, and the grid as a whole. These models support decisions about what types of equipment to invest in, where to put it, and how to use it in second-to-second, minute-to-minute, hourly, daily, and long-term operations.
  • If a generator, load, or other element of the system does not act in the way that its model predicts, the mismatch between reality and model-based expectations can degrade reliability and efficiency. Inaccurate models have contributed to a number of major North American power outages.
  • Because the behavior of power plants and electric grids may change over time, the models should be checked and updated to assure that they remain accurate.
  • Engineers use the processes of validation and calibration to make sure that a model can accurately predict the behavior of the modeled object. Validation assures that the model accurately represents the operation of the real system—including model structure, correct assumptions, and that the output matches actual events.
  • a calibration process may be used to make minor adjustments to the model and its parameters so that the model continues to provide accurate outputs.
  • High-speed, time-synchronized data collected using PMUs may facilitate model validation of the dynamic response to grid events.
  • Grid operators may use, for example, PMU data recorded during normal plant operations and grid events to validate grid and power plant models quickly and at lower cost.
  • the grid operator can also diagnose the causes of operating events, such as wind-driven oscillations, and identify appropriate corrective measures before those oscillations spread to harm other assets or cause a loss of load.
  • devices may exchange information via any communication network which may be one or more of a Local Area Network (“LAN”), a Metropolitan Area Network (“MAN”), a Wide Area Network (“WAN”), a proprietary network, a Public Switched Telephone Network (“PSTN”), a Wireless Application Protocol (“WAP”) network, a Bluetooth network, a wireless LAN network, and/or an Internet Protocol (“IP”) network such as the Internet, an intranet, or an extranet.
  • any devices described herein may communicate via one or more such communication networks.
  • the model parameter tuning engine 250 may store information into and/or retrieve information from various data stores, which may be locally stored or reside remote from the model parameter tuning engine 250 . Although a single model parameter tuning engine 250 is shown in FIG. 2 , any number of such devices may be included. Moreover, various devices described herein might be combined according to embodiments of the present invention. For example, in some embodiments, the measurement data store 220 and the model parameter tuning engine 250 might comprise a single apparatus.
  • the system 200 functions may be performed by a constellation of networked apparatuses, such as in a distributed processing or cloud-based architecture.
  • a user may access the system 200 via the device 290 (e.g., a Personal Computer (“PC”), tablet, or smartphone) to view information about and/or manage operational information in accordance with any of the embodiments described herein.
  • an interactive graphical user interface display may let an operator or administrator define and/or adjust certain parameters (e.g., when a new electrical power grid component is calibrated) and/or provide or receive automatically generated recommendations or results from the system 200 .
  • FIG. 3 illustrates a block diagram of an exemplary system architecture 300 for sequential calibration, in accordance with one embodiment of the disclosure.
  • the system 300 receives a plurality of events 314 , 316 , and 318 sequentially.
  • the events 314 , 316 , and 318 are received by the event screening component 302 , which screens which events 314 , 316 , and 318 are to be analyzed.
  • Events 314, 316, and 318 are occurrences where the voltage and/or the frequency of the power system changes.
  • The event screening component 302 determines whether each event 314, 316, or 318 is novel enough.
  • For example, an event 316 may be a generator turning on. If a similar event has already been analyzed, the event screening component 302 skips this event 316.
  • The event screening component 302 compares the event 316 to those events stored in a database 310. If the event 316 is novel enough, then the event 316 is stored in the database 310. The event 316 is then sent to the parameter identifiability component 304. This component 304 analyzes the event 316 in combination with past events and the parameters identified as significant for those events to determine which parameters are significant for this event 316.
  • the tunable parameters are transmitted to the Bayesian Optimization component 306 , which further analyzes the significant parameters to calibrate the parameters in the model being executed by the simulation engine 308 to get a final set of calibrated parameters 312 .
  • the steps in this process are further described below.
  • the first step is the sequential event screening.
  • the goal is to screen only the representative or most characteristic events among all events so as to get a faster calculation and avoid overfitting to some specific events.
  • the underlying assumption is that similar input/output (IO) curve features lead to similar dynamics, which in turn leads to similar dynamic parameters.
  • The bit-string for an event is similar to the fingerprint used in molecular analysis in medicine.
  • The fingerprint comprises a long string with each bit set to either zero or one.
  • Each bit in the fingerprint corresponds to a feature of the event, and that bit is set or not according to whether the given event has the feature.
  • The features of an event may comprise the peak value, bottom value, overshoot percentage, rise time, settling time, phase shift, damping ratio, energy function, cumulative deviation in energy, Fourier transform spectrum information, principal components, and steady-state gain (P, Q, u, f) of the event.
  • the feature is extracted from the time series of active power, reactive power, voltage and frequency.
  • A counting vector allows for a more detailed description of the event as a multi-set of features, whereas a binary fingerprint as introduced above simply describes the event as a set of features.
  • counting vectors can easily be converted into binary vectors. An example of the counting vectors and binary vectors may be seen in FIG. 4A .
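  • For illustration, the following is a minimal Python sketch of this fingerprinting idea: a few of the listed features are extracted from one time series, quantized into a counting vector, and then converted into a binary fingerprint. The feature definitions, bin edges, and bits-per-feature width are illustrative assumptions, not values taken from the patent.

```python
import numpy as np

def extract_features(t, y):
    """Extract a few of the listed event features from one time series
    (e.g., active power). Definitions and thresholds are illustrative."""
    y0, y_ss = y[0], y[-1]                      # initial and settled values
    peak, bottom = y.max(), y.min()
    step = y_ss - y0
    overshoot = (peak - y_ss) / abs(step) * 100 if step != 0 else 0.0
    band = 0.05 * max(abs(step), 1e-9)          # +/-5% settling band (assumed)
    outside = np.where(np.abs(y - y_ss) > band)[0]
    settling_time = t[outside[-1]] if outside.size else t[0]
    return {"peak": peak, "bottom": bottom, "overshoot_pct": overshoot,
            "settling_time": settling_time, "steady_state_gain": step}

def counting_vector(features, bins):
    """Quantize each feature into an integer count (multi-set description)."""
    return np.array([np.digitize(features[k], bins[k]) for k in sorted(bins)])

def binary_fingerprint(counts, width=4):
    """Convert a counting vector into a binary fingerprint: count c sets the
    first c of the `width` bits reserved for that feature."""
    bits = []
    for c in counts:
        c = int(min(c, width))
        bits.extend([1] * c + [0] * (width - c))
    return np.array(bits, dtype=np.uint8)

# Example with a synthetic step-like event response
t = np.linspace(0, 10, 500)
y = 1.0 - np.exp(-t) * np.cos(3 * t)                 # toy active-power trace
feats = extract_features(t, y)
bins = {k: [0.25, 0.5, 1.0, 2.0] for k in feats}     # assumed bin edges
fp = binary_fingerprint(counting_vector(feats, bins))
print(feats)
print(fp)
```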
  • In some embodiments, the Tanimoto coefficient S_T(A, B) = N_AB/(N_A + N_B - N_AB) is used, where N_A and N_B are the numbers of one-bits in bit-strings A and B, and N_AB is the number of one-bits the two strings have in common.
  • The Tanimoto coefficient quantifies the similarity between two bit-strings as a number in the interval [0, 1], where 0 means that the two bit-strings have no one-bits in common and 1 means that the two bit-strings are equal.
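  • As a companion sketch (not the patent's actual code), the Tanimoto coefficient over binary fingerprints and a simple novelty test can be written as follows; the 0.75 similarity threshold and the example fingerprints are assumptions.

```python
import numpy as np

def tanimoto(a, b):
    """Tanimoto coefficient S_T(A, B) = N_AB / (N_A + N_B - N_AB)
    for binary fingerprints a and b (arrays of 0/1)."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    common = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum() - common
    return 1.0 if denom == 0 else common / denom

def is_novel(fp, stored, threshold=0.75):
    """An event is kept only if it is not too similar to any stored event."""
    return all(tanimoto(fp, s) < threshold for s in stored)

# Screen a stream of event fingerprints sequentially
event_fingerprints = [np.array([1, 1, 0, 1, 0, 0, 1, 0]),
                      np.array([1, 1, 0, 1, 0, 0, 1, 1]),   # near-duplicate, screened out
                      np.array([0, 0, 1, 0, 1, 1, 0, 1])]
database = []
for fp in event_fingerprints:
    if is_novel(fp, database):
        database.append(fp)          # novel event: keep for calibration
print(len(database), "unique events retained")
```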
  • the second step is the Sequential parameter identifiability.
  • the goal of this step is to perform a comprehensive identifiability study across multiple events and provide an identifiable parameter set for the simultaneous calibration which tunes the most identifiable parameters to match the measurement of multiple events simultaneously.
  • The algorithm first generates the trajectory sensitivity matrices for all the selected disturbances by perturbing each parameter and feeding the perturbed parameter values to a playback simulation platform. The algorithm then provides two options, depending on the number of disturbances being considered. If the number of disturbances is large enough that the union of the null spaces of all the disturbances has a rank higher than the number of parameters, the algorithm solves an optimization problem to find a solution that has the minimum total distance to all the null spaces. Such a solution gives a comprehensive identifiability ranking of parameters across disturbances. If the number of disturbances is small, the second option is taken, which evaluates the identifiability of the parameters for each disturbance and then calculates the average identifiability ranking across disturbances. Since the sensitivity studies are conducted at the parameters' default values, the conditioning tool also performs a global sensitivity consistency study for when the parameters' values deviate far from their default values. Such a study portrays the geometry of the parameter sensitivity in the entire parameter space.
  • When N events are considered, applying singular value decomposition (SVD) to the sensitivity trajectory matrices results in N null spaces.
  • The null space for one event can also be interpreted as a system of homogeneous algebraic equations with the parameter sensitivities as the unknowns. Since the null space from one event has a rank lower than the number of parameters, the number of equations is less than the number of unknowns. Considering more events is equivalent to adding more equations to the system. After the number of events exceeds a certain value (provided the characteristics of the events are sufficiently diverse), the system has more equations than unknowns.
  • FIG. 4C demonstrates how to use null spaces from multiple events.
  • The three lines shown in FIG. 4C correspond to the vectors that span the null space of the sensitivity matrix for multiple events.
  • the point that is nearest in distance to all three lines represents the relationship among the dependent parameters satisfied across all of those events.
  • The sensitivity magnitude vector M_sen ∈ R^(N_p × 1) of all parameters is the solution of the following optimization problem:
  • where N_event is the number of events and N_null is the size of the null space of the sensitivity matrix.
  • the above optimization problem may be solved using a standard Linear Programming approach.
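  • The following Python sketch illustrates the second option described above (per-disturbance identifiability, then an average ranking across disturbances). The playback simulation is replaced by a toy stand-in, and the perturbation size and toy dynamics are assumptions; the smallest singular value from the SVD is used only to flag dependent parameter directions.

```python
import numpy as np

def playback_sim(params, event):
    """Stand-in for the playback simulation engine: stacked simulated
    P-hat/Q-hat trajectory for one event (toy dynamics only)."""
    t = np.linspace(0, 5, 200)
    p = params
    P = p[0] * np.exp(-p[1] * t) * np.cos(p[2] * t + event["phase"])
    Q = p[3] * (1 - np.exp(-p[4] * t)) + 0.1 * event["phase"]
    return np.concatenate([P, Q])

def sensitivity_matrix(params, event, rel_step=0.05):
    """Trajectory sensitivity matrix: one column per perturbed parameter."""
    base = playback_sim(params, event)
    cols = []
    for i, p in enumerate(params):
        pert = params.copy()
        pert[i] = p * (1 + rel_step) if p != 0 else rel_step
        cols.append((playback_sim(pert, event) - base) / (pert[i] - p))
    return np.column_stack(cols)

def identifiability_ranking(params, events):
    """Rank parameters per event by sensitivity magnitude (column norm),
    check dependencies via the smallest singular value, then average the
    rankings across events."""
    ranks, min_sv = [], []
    for ev in events:
        S = sensitivity_matrix(params, ev)
        mags = np.linalg.norm(S, axis=0)                         # sensitivity magnitude
        min_sv.append(np.linalg.svd(S, compute_uv=False).min())  # dependency check
        ranks.append(np.argsort(np.argsort(-mags)))              # rank 0 = most sensitive
    return np.mean(ranks, axis=0), min_sv

params = np.array([1.0, 0.5, 3.0, 0.8, 0.7])
events = [{"phase": 0.0}, {"phase": 0.6}, {"phase": 1.2}]
avg_rank, min_singular_values = identifiability_ranking(params, events)
print("average identifiability rank per parameter:", avg_rank)
```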
  • The third step is the Bayesian optimization. Since grid disturbances occur intermittently, the user of the calibration tool may be required to re-calibrate model parameters in a sequential manner as new disturbances come in. In this scenario, the user starts with a model that was calibrated to some observed grid disturbances and observes a larger-than-acceptable mismatch with a newly encountered disturbance. The task now is to tweak the model parameters so that the model explains the new disturbance without detrimentally affecting the match with earlier disturbances. One potential solution is to run calibration simultaneously on all events of interest strung together; however, this comes at the cost of significant computational expense and the engineering involved in enabling a batch of events to run simultaneously. A more efficient method may be to carry forward some essential information from the earlier calibration runs to guide the subsequent calibration run, helping it explain the new disturbance without losing the earlier calibration matches.
  • the framework of Bayesian estimation may be used to develop a sequential estimation capability into the existing calibration framework.
  • the true posterior distribution of parameters (assuming Gaussian priors) after the calibration process may be quite complicated due to the nonlinearity of the models.
  • One approach in sequential estimation is to consider a Gaussian approximation of this posterior as is done in Kalman filtering approaches to sequential nonlinear estimation. In a nonlinear least squares approach, this simplifies down to a quadratic penalty term for deviations from the previous estimates, and the weights for this quadratic penalty come from a Bayesian argument.
  • the measured values of P and Q may be represented by a simulated value plus an error term.
  • The errors may follow a Normal distribution, either independently or with errors correlated in some known way, such as, but not limited to, a multivariate Normal distribution.
  • The above may be used to find the parameters b of a model from the data.
  • The parameter value b_0 that minimizes χ² may be calculated using a Taylor series approximation.
  • Σ_b is the covariance, or “standard error,” matrix of the fitted parameters. A code sketch of this sequential update follows.
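  • The sketch below is a minimal version of the sequential update, assuming a toy simulator, Gaussian measurement noise, and a Gauss-Newton covariance approximation (none of which are taken from the patent): the least-squares residual combines the mismatch with the new event's measured P/Q and a quadratic penalty on deviation from the previous estimate, weighted through the previous covariance Σ_b.

```python
import numpy as np
from scipy.optimize import least_squares

def simulate_PQ(b, event):
    """Stand-in for the playback simulation of P-hat and Q-hat for one event."""
    t = np.linspace(0, 5, 100)
    P = b[0] * np.exp(-b[1] * t) + event["v_dip"]
    Q = b[2] * (1 - np.exp(-b[1] * t))
    return np.concatenate([P, Q])

def sequential_calibrate(b_prev, Sigma_prev, event, y_meas, sigma_meas=0.02):
    """One sequential update: fit the new event while penalizing deviation
    from the previous estimate b_prev (Gaussian approximation of the
    earlier calibration runs)."""
    W = np.linalg.cholesky(np.linalg.inv(Sigma_prev))     # penalty weight

    def residuals(b):
        r_fit = (simulate_PQ(b, event) - y_meas) / sigma_meas
        r_prior = W.T @ (b - b_prev)                      # quadratic penalty term
        return np.concatenate([r_fit, r_prior])

    sol = least_squares(residuals, b_prev)
    Sigma_new = np.linalg.inv(sol.jac.T @ sol.jac)        # Gauss-Newton covariance
    return sol.x, Sigma_new

# Usage with synthetic data for one new event
rng = np.random.default_rng(0)
b_true = np.array([1.0, 0.8, 0.5])
event = {"v_dip": 0.1}
y_meas = simulate_PQ(b_true, event) + 0.02 * rng.standard_normal(200)
b_prev = np.array([0.9, 0.7, 0.6])       # estimate carried over from earlier events
Sigma_prev = 0.05 * np.eye(3)            # its covariance ("standard error" matrix)
b_new, Sigma_new = sequential_calibrate(b_prev, Sigma_prev, event, y_meas)
print(b_new)
```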
  • FIG. 5 is a process 500 for power system model parameter conditioning according to some embodiments.
  • Disturbance data may be obtained (e.g., from a PMU or DFR) to provide, for example, V, f, P, and Q measurement data at a Point of Interest (“POI”).
  • a playback simulation may run load model benchmarking using default model parameters (e.g., associated with a Positive Sequence Load Flow (“PSLF”) or Transient Security Assessment Tool (“TSAT”)).
  • model validation may compare measurements to default model response. If the response matches the measurements, the framework may end (e.g., the existing model is sufficiently correct and does not need to be updated).
  • An event analysis algorithm may determine if the event is qualitatively different from previous events.
  • A parameter identifiability analysis algorithm may determine the most identifiable set of parameters across all events of interest. For example, a first event may have 90 to 100 parameters. For that event, Step 525 uses the parameter identifiability algorithm to select 1 to 20 of those parameters.
  • At Step 530, an Unscented Kalman Filter (“UKF”)/optimization-based parameter estimation algorithm/process may be performed.
  • The estimated parameter values, confidence metrics, and error in model response (as compared to measurements) may be reported.
  • Steps 505 - 515 are considered model validation 535 and Steps 520 - 530 are considered model calibration 540 .
  • the systems may use one or both of model validation 535 and model calibration 540 .
  • Steps 505 - 530 are considered a model validation and calibration (MVC) process 500 .
  • Disturbance data monitored by one or more PMUs coupled to an electrical power distribution grid may be received.
  • The disturbance data can include voltage (“V”), frequency (“f”), and/or active and reactive (“P” and “Q”) power measurements from one or more points of interest (POI) on the electrical power grid.
  • a power system model may include model parameters. These model parameters can be the current parameters incorporated in the power system model. The current parameters can be stored in a model parameter record. Model calibration involves identifying a subset of parameters that can be “tuned” and modifying/adjusting the parameters such that the power system model behaves identically or almost identically to the actual power component being represented by the power system model.
  • the model calibration can implement model calibration with three functionalities.
  • the first functionality is an event screening tool to select characteristics of a disturbance event from a library of recorded event data. This functionality can simulate the power system responses when the power system is subjected to different disturbances.
  • The second functionality is a parameter identifiability study. When implementing this functionality, the tool can simulate the response(s) of a power system model.
  • the third functionality is simultaneous tuning of models using event data to adjust the identified model parameters.
  • the second functionality (parameter identifiability) and the third functionality (tuning of model parameters) may be done using a surrogate model in place of a dynamic simulation engine.
  • Event screening can be implemented during the simulation to provide computational efficiency. If hundreds of events are stitched together and fed into the calibration algorithm unselectively, the algorithm may not be able to converge. To keep the number of events manageable while still maintaining an acceptable representation of all the events, a screening procedure may be performed to select the most characteristic events. Depending on the type of event, the measurement data can have different characteristics. For example, if an event is a local oscillation, the oscillation frequency in the measurement data will be much faster than for an inter-area oscillation event. In some implementations, a K-medoids clustering algorithm can be utilized to group events with similar characteristics together, thus reducing the number of events to be calibrated, as sketched below.
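  • The sketch below illustrates the grouping idea with a minimal K-medoids pass over a Tanimoto-distance matrix between event fingerprints; the library-free implementation, the distance choice, and the example fingerprints are assumptions rather than the patent's implementation. The medoid of each cluster is the representative event passed on to calibration.

```python
import numpy as np

def tanimoto_distance_matrix(fps):
    """Pairwise distance 1 - S_T between binary event fingerprints."""
    n = len(fps)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            a, b = fps[i].astype(bool), fps[j].astype(bool)
            c = np.logical_and(a, b).sum()
            denom = a.sum() + b.sum() - c
            s = c / denom if denom else 1.0
            D[i, j] = D[j, i] = 1.0 - s
    return D

def k_medoids(D, k, iters=50, seed=0):
    """Minimal alternating K-medoids on a precomputed distance matrix."""
    rng = np.random.default_rng(seed)
    medoids = rng.choice(len(D), size=k, replace=False)
    labels = np.argmin(D[:, medoids], axis=1)
    for _ in range(iters):
        new_medoids = np.array([
            np.flatnonzero(labels == c)[np.argmin(
                D[np.ix_(labels == c, labels == c)].sum(axis=1))]
            for c in range(k)])
        if np.array_equal(new_medoids, medoids):
            break
        medoids = new_medoids
        labels = np.argmin(D[:, medoids], axis=1)
    return medoids, labels

fps = [np.array(v) for v in ([1, 1, 0, 1, 0, 0, 1, 0], [1, 1, 0, 1, 0, 0, 1, 1],
                             [0, 0, 1, 0, 1, 1, 0, 1], [0, 0, 1, 0, 1, 0, 0, 1])]
D = tanimoto_distance_matrix(fps)
medoids, labels = k_medoids(D, k=2)
print("representative events:", medoids, "cluster labels:", labels)
```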
  • The surrogate model or models (such as neural networks), which have a function equivalent to the dynamic simulation engine, may be used for both identifiability and calibration.
  • The surrogate model may be built offline while there is no request for model calibration. Once built, the surrogate model, comprising a set of weights and biases in the learned network structure, is used to predict the active power (P̂) and reactive power (Q̂) given different sets of parameters together with time-stamped voltage (V) and frequency (f), as sketched below.
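  • A brief sketch of such a surrogate, assuming scikit-learn's MLPRegressor and toy training data (in practice the training targets P̂ and Q̂ would come from offline runs of the dynamic simulation engine), is shown below; the feature layout, network size, and toy target functions are illustrative assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

# Offline training set: each row is [tunable parameters..., V, f] at one time
# sample; the targets stand in for the simulation engine's P-hat and Q-hat.
n_samples, n_params = 5000, 4
X = np.hstack([rng.uniform(0.5, 1.5, (n_samples, n_params)),   # parameters
               rng.uniform(0.95, 1.05, (n_samples, 1)),        # voltage V
               rng.uniform(59.9, 60.1, (n_samples, 1))])       # frequency f
Y = np.column_stack([X[:, 0] * X[:, 4] ** 2,                   # toy "P-hat"
                     X[:, 1] * (X[:, 5] - 60.0)])              # toy "Q-hat"

# Train the surrogate offline, when no calibration task is running.
surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                         random_state=0).fit(X, Y)

# During calibration, the surrogate replaces the dynamic simulation engine.
x_query = np.array([[1.0, 0.8, 1.2, 0.9, 1.01, 60.02]])
P_hat, Q_hat = surrogate.predict(x_query)[0]
print(P_hat, Q_hat)
```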
  • the parameter identifiability analysis addresses two aspects: (a) magnitude of sensitivity of output to parameter change; and (b) dependencies among different parameter sensitivities. For example, if the sensitivity magnitude of a particular parameter is low, the parameter would appear in a row being close to zero in the parameter estimation problem's Jacobian matrix. Also, if some of the parameter sensitivities have dependencies, it reflects that there is a linear dependence among the corresponding rows of the Jacobian. Both these scenarios lead to singularity of the Jacobian matrix, making the estimation problem infeasible. Therefore, it may be important to select a subset of parameters which are highly sensitive as well as result in no dependencies among parameter sensitivities. Once the subset of parameters is identified, values in the active power system model for the parameters may be updated, and the system may generate a report and/or display of the estimated parameter values(s), confidence metrics, and the model error response as compared to measured data.
  • FIG. 6 illustrates a model calibration algorithm that can be used by the model calibration algorithm component in accordance with some embodiments.
  • The model calibration algorithm attempts to find a parameter value (θ*) for a parameter (or parameters) of the power system model that creates a matching output between the simulated active power (P̂) and simulated reactive power (Q̂) predicted by the model and the actual active power (P) and actual reactive power (Q) of the component on the electrical grid.
  • the user of the calibration tool described herein may be required to re-calibrate model parameters in a sequential manner as new disturbances come in.
  • The user starts with a model that was calibrated to some observed grid disturbances and observes a larger-than-acceptable mismatch with a newly encountered disturbance.
  • the task now is to tweak the model parameters so that the model explains the new disturbance without detrimentally affecting the match with earlier disturbances.
  • One solution would be to run calibration simultaneously on all events of interest strung together, but this comes at the cost of significant computational expense and the engineering involved in enabling a batch of events to run simultaneously. It would be far more preferable to carry forward some essential information from the earlier calibration runs to guide the subsequent calibration run, helping it explain the new disturbance without losing the earlier calibration matches.
  • the framework of Bayesian estimation may be used to develop a sequential estimation capability into the existing calibration framework.
  • the true posterior distribution of parameters (assuming Gaussian priors) after the calibration process can be quite complicated due to the nonlinearity of the models.
  • the typical approach in sequential estimation is to consider a Gaussian approximation of this posterior as is done in Kalman filtering approaches to sequential nonlinear estimation. In our nonlinear least squares approach, this boils down to a quadratic penalty term for deviations from the previous estimates, and the weights for this quadratic penalty come from a Bayesian argument.
  • FIG. 7 displays example results of the performance (in root mean square [r.m.s.] terms) of events calibrated for only one event (in corresponding column) evaluated against all other events (listed in the rows).
  • FIG. 8 displays example results of the sequential estimation module being implemented and tested with a test data set for a gas plant.
  • FIG. 9 displays example results of the sequential estimation module being implemented and tested with a test data set for a hydro plant.
  • FIG. 7 shows example results without sequential estimation.
  • The calibration algorithm was executed from scratch for each of the 12 events for the gas plant case to obtain 12 sets of calibrated parameters. Then a model validation exercise was executed in which the model response resulting from each of these 12 sets of calibrated parameters was compared for each of the 12 events.
  • the root mean square (r.m.s) errors between measured and simulated real and reactive (P and Q resp.) power responses are shown in FIG. 7 .
  • In FIG. 7 there are a lot of “reds” in this table, which means that if the model is tuned to only one event (irrespective of the event), one cannot expect it to necessarily explain all other events. This motivates the need for sequential estimation.
  • FIGS. 8 and 9 illustrate the mismatch in model response for each event for the model parameters obtained at the end of sequential estimation for gas plant and hydro plant, respectively.
  • Sequential estimation refers to calibrating the model one event at a time, sequentially, while carrying forward some information from the previous runs as described earlier.
  • The example results shown in FIGS. 8 and 9 show a marked improvement over the results shown in FIG. 7 , as the parameters at the end of the sequential run are able to explain most events better than the default set. For example, there are two columns in each of FIGS. 8 and 9 .
  • The performance of this approach also appears competitive with, and in some instances better than, the main sequential approach. While this can be explained in the noise-free case evaluated here, the sequential approach is expected to perform more robustly in the presence of noise.
  • The last two rows compare the estimated parameter set with the true parameter set in terms of the normalized 2-norm and infinity-norm. The comparison between the two options is inconclusive across the two cases (but slightly in favor of the main sequential approach).
  • FIGS. 10A and 10B illustrate a process for identifying and estimating parameters in accordance with at least one embodiment.
  • The raw parameters are analyzed for identifiability. Some of the parameters are then down-selected, which leads to the parameter estimation.
  • FIG. 11 illustrates candidate parameter estimation algorithms 1100 according to some embodiments.
  • The measured input/output data 1110 (u, y_m) may be used by a power system component model 1122 and a UKF-based approach 1124 to create an estimated parameter (p*) 1140.
  • the system may compute sigma points based on covariance and standard deviation information.
  • The Kalman gain matrix K may be computed from the sigma-point covariance information, and the parameters may be updated based on:
  • The measured input/output data 1110 (u, y_m) may be used by a power system component model 1132 and an optimization-based approach 1134 to create the estimated parameter (p*) 1140.
  • the following optimization problem may be solved:
  • The system may then compute output-to-parameter Jacobian information and iteratively solve the above optimization problem by moving the parameters in the directions indicated by the Jacobian information, as in the sketch below.
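  • As an illustration of the optimization-based path only (the UKF path is not shown), here is a minimal Gauss-Newton sketch with a finite-difference Jacobian; the component model, step sizes, and data are toy assumptions rather than the approach 1134 itself.

```python
import numpy as np

def component_model(p, u):
    """Toy stand-in for the power system component model: output y for input u."""
    return p[0] * u + p[1] * u ** 2

def jacobian(p, u, eps=1e-6):
    """Finite-difference Jacobian of the output with respect to the parameters."""
    y0 = component_model(p, u)
    J = np.zeros((y0.size, p.size))
    for i in range(p.size):
        dp = np.zeros_like(p)
        dp[i] = eps
        J[:, i] = (component_model(p + dp, u) - y0) / eps
    return J

def gauss_newton(p0, u, y_meas, iters=20, tol=1e-9):
    """Iteratively move the parameters in the direction indicated by the Jacobian."""
    p = p0.astype(float)
    for _ in range(iters):
        r = y_meas - component_model(p, u)       # residual against measurements
        J = jacobian(p, u)
        step, *_ = np.linalg.lstsq(J, r, rcond=None)
        p += step
        if np.linalg.norm(step) < tol:
            break
    return p

u = np.linspace(0, 1, 50)                        # measured input trajectory
rng = np.random.default_rng(2)
y_meas = component_model(np.array([2.0, -0.5]), u) + 0.01 * rng.standard_normal(50)
p_star = gauss_newton(np.array([1.0, 0.0]), u, y_meas)
print("estimated parameters p*:", p_star)
```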
  • FIG. 12 illustrates a two-stage approach of the process for model calibration.
  • PMU data from events is fed into a dynamic simulation engine.
  • the dynamic simulation engine communicates with a parameter identifiability analysis component and returns the changes to the parameters.
  • the parameter identifiability analysis component also transmits a set of identifiable parameters to a model calibration algorithm component.
  • the model calibration algorithm component uses the set of identifiable parameters, PMU data from events, and other data from the dynamic simulation engine to generate estimated parameters. This approach may be used to calibrate the tuning model parameters.
  • With the playback simulation capability, the user can compare the response (active power and reactive power) of system models with the dynamics observed during disturbances in the system, which is called model validation.
  • Grid disturbances are also known as events.
  • As shown on the right side of FIG. 12 , the goal is to achieve a satisfactory match between the measurement data and the simulated response. If an obvious discrepancy is observed, then the model calibration process may be employed.
  • the first step of the model calibration process is parameter identification, which aims to identify a subset of parameters with strong sensitivity to the observed event.
  • The model calibration process requires a balance between matching in the measurement space and reasonableness in the model parameter space. Numerical curve fitting without adequate engineering guidance tends to produce overfitted parameter results and leads to non-unique sets of parameters (yielding the same curve-fitting performance), which should be avoided.
  • FIG. 13 is a block diagram of an apparatus or platform 1300 that may be, for example, associated with the system 200 of FIG. 2 and/or any other system described herein.
  • the platform 1300 comprises a processor 1310 , such as one or more commercially available Central Processing Units (“CPUs”) in the form of one-chip microprocessors, coupled to a communication device 1320 configured to communicate via a communication network (not shown in FIG. 13 ).
  • the communication device 1320 may be used to communicate, for example, with one or more remote measurement units, components, user interfaces, etc.
  • The platform 1300 further includes an input device 1340 (e.g., a computer mouse and/or keyboard to input power grid and/or modeling information) and/or an output device 1350 (e.g., a computer monitor to render a display, provide alerts, transmit recommendations, and/or create reports).
  • a mobile device, monitoring physical system, and/or PC may be used to exchange information with the platform 1300 .
  • the processor 1310 also communicates with a storage device 1330 .
  • the storage device 1330 may comprise any appropriate information storage device, including combinations of magnetic storage devices (e.g., a hard disk drive), optical storage devices, mobile telephones, and/or semiconductor memory devices.
  • the storage device 1330 stores a program 1312 and/or a power system disturbance based model calibration engine 1314 for controlling the processor 1310 .
  • the processor 1310 performs instructions of the programs 1312 , 1314 , and thereby operates in accordance with any of the embodiments described herein.
  • the processor 1310 may calibrate a dynamic simulation engine, having system parameters, associated with a component of an electrical power system (e.g., a generator, wind turbine, etc.).
  • the processor 1310 may receive, from a measurement data store 1360 , measurement data measured by an electrical power system measurement unit (e.g., a phasor measurement unit, digital fault recorder, or other means of measuring frequency, voltage, current, or power phasors). The processor 1310 may then pre-condition the measurement data and set-up an optimization problem based on a result of the pre-conditioning.
  • The system parameters of the dynamic simulation engine may be determined by solving the optimization problem with an iterative method until at least one convergence criterion is met. According to some embodiments, solving the optimization problem includes a Jacobian approximation that does not call the dynamic simulation engine if the improvement of the residual meets a pre-defined criterion, as sketched below.
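  • A sketch of this Jacobian-reuse idea, under the assumption that "sufficient improvement" means the residual cost dropped by at least a fixed fraction (the 10% figure and the toy model are assumptions, not values from the patent):

```python
import numpy as np

def calibrate_with_jacobian_reuse(simulate, jacobian, p0, y_meas,
                                  iters=30, improve_ratio=0.1):
    """Gauss-Newton loop that skips recomputing the expensive Jacobian
    (which calls the simulation engine) whenever the residual improved enough."""
    p = np.asarray(p0, float)
    r = y_meas - simulate(p)
    J = jacobian(p)                    # expensive: calls the simulation engine
    prev_cost = 0.5 * r @ r
    for _ in range(iters):
        step, *_ = np.linalg.lstsq(J, r, rcond=None)
        p = p + step
        r = y_meas - simulate(p)
        cost = 0.5 * r @ r
        if cost > (1.0 - improve_ratio) * prev_cost:
            J = jacobian(p)            # improvement too small: refresh the Jacobian
        prev_cost = cost
        if np.linalg.norm(step) < 1e-9:
            break
    return p

# Usage with a toy model that is linear in its parameters
u = np.linspace(0, 1, 50)
sim = lambda p: p[0] * u + p[1] * u ** 2
jac = lambda p: np.column_stack([u, u ** 2])
y_meas = sim(np.array([2.0, -0.5]))
p_star = calibrate_with_jacobian_reuse(sim, jac, np.array([1.0, 0.0]), y_meas)
print(p_star)
```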
  • the programs 1312 , 1314 may be stored in a compressed, uncompiled and/or encrypted format.
  • the programs 1312 , 1314 may furthermore include other program elements, such as an operating system, clipboard application, a database management system, and/or device drivers used by the processor 1310 to interface with peripheral devices.
  • information may be “received” by or “transmitted” to, for example: (i) the platform 1300 from another device; or (ii) a software application or module within the platform 1300 from another software application, module, or any other source.
  • FIG. 14 illustrates a method 1400 of performing model calibration using a surrogate model in accordance with some embodiments.
  • the method 1400 may be performed by a computing system such as a web server, a user device, a database, an on-premises server, a cloud platform, a desktop PC, a mobile device, and the like.
  • the computing device receives 1410 a plurality of sequential events as described herein.
  • the computing device filters 1420 the sequential events using similarity screening, where new events are evaluated to determine if they are different from the previously received events.
  • the event's dynamic features are coded as a bit-string. The features considered include P, Q, U, and F.
  • the Tanimoto coefficient is used for the similarity metrics.
  • the computing device identifies 1430 the sequential parameters based on sensitivity, where the most sensitive parameter subset is determined based on an increasing number of events. Then the computing device performs 1440 Bayesian optimization to determine new parameter values by considering deviation from previous parameter estimates. In some embodiments, the weight for the penalty is determined from a Bayesian argument. When a new event is received 1410, the process 1400 is performed again to re-adjust for the new event.
  • the parameter determination for the subsequent event considers both the residual of the simulated response and the statistical information of previously determined parameter values based on previous events.
  • the events are filtered/screened using an event screening process that is based on features of the events, including peak value, bottom value, overshoot percentage, rising time, settling time, phase shift, damping ratio, energy function, cumulative deviation in energy, Fourier transformation spectrum information, principal component, and steady state gain (P, Q, u, f) extracted from the time series of active power, reactive power, voltage, and frequency.
  • the system 300 (shown in FIG. 3 ) stores a model of a device, such as generator 110 .
  • the model includes a plurality of parameters.
  • the system 300 receives a plurality of events 314 , 316 , and 318 (shown in FIG. 1 ) associated with the device.
  • the events 314 , 316 , and 318 include sensor information of the event 314 , 316 , and 318 occurring at the device.
  • the sensor information is associated with a similar device.
  • the system 300 filters the plurality of events to generate a plurality of unique events.
  • the system 300 sequentially analyzes the plurality of unique events to determine a set of calibrated parameters 312 (shown in FIG. 3 ) for the model.
  • the system 300 updates the model to include the set of calibrated parameters 312 .
  • the system 300 executes the model based on one or more events of the plurality of events 314 , 316 , and 318 to generate one or more results and identifies one or more sensitive parameters, such as tunable parameters based on the one or more results.
  • the system 300 may perform a Bayesian optimization on the one or more sensitive parameters to determine updated values for the one or more sensitive parameters.
  • the system 300 performs the Bayesian optimization by determining the updated values for the one or more sensitive parameters based on a nonlinear optimization.
  • the objective function of the nonlinear optimization includes two terms. The first term is calculated as the residual between a simulated response based on the calibrated parameter and the measured response.
  • the second term is calculated as a quadratic penalty term for deviations of parameters from one or more previous estimates.
  • the weights for the quadratic penalty are derived from a Bayesian argument.
  • the system 300 derives the quadratic penalty based on a covariance matrix of previous estimated parameters.
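  • A minimal sketch of one such penalized objective is given below, assuming the penalty weight is taken as the inverse of the sample covariance of previously estimated parameters (one common Bayesian-style choice); the `simulate` stand-in and the toy numbers are hypothetical.

```python
import numpy as np

def simulate(theta):
    # Hypothetical surrogate for the simulated P/Q response at one event.
    return np.array([theta[0] * 0.9 + theta[1], theta[0] - 0.3 * theta[1]])

def penalized_objective(theta, measured, prior_mean, prior_cov):
    """Residual term plus a quadratic penalty on deviation from previous
    parameter estimates, weighted by the inverse covariance."""
    residual = simulate(theta) - measured
    deviation = theta - prior_mean
    weight = np.linalg.inv(prior_cov)          # Bayesian-style penalty weight
    return residual @ residual + deviation @ weight @ deviation

# Toy usage: previous events produced these parameter estimates.
previous_estimates = np.array([[1.00, 0.50],
                               [1.10, 0.45],
                               [0.95, 0.55]])
prior_mean = previous_estimates.mean(axis=0)
prior_cov = np.cov(previous_estimates, rowvar=False) + 1e-6 * np.eye(2)

measured = np.array([1.4, 0.8])
print(penalized_objective(np.array([1.05, 0.5]), measured, prior_mean, prior_cov))
```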
  • the system 300 codes each of the plurality of events based on one or more dynamic features of the corresponding event.
  • the one or more dynamic features may include, but are not limited to, one or more of peak value, bottom value, overshoot percentage, rising time, settling time, phase shift, damping ratio, energy function, cumulative deviation in energy, Fourier transformation spectrum information, principal component, and steady state gain of the corresponding event.
  • the system 300 may extract the one or more dynamic features from a time series of active power, reactive power, voltage and frequency of the corresponding event.
  • the plurality of events may be each coded into a bit-string.
  • the plurality of events may also be coded into bit vectors.
  • the system 300 compares the plurality of binary vectors using the Tanimoto coefficient. Then the system 300 discards similar subsequent events based on a similarity threshold and generates the plurality of unique events based on at least one remaining event.
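  • As an illustration of coding an event into such a bit vector, the sketch below sets one fingerprint bit per feature threshold crossed; the feature names and threshold levels are hypothetical choices made only to keep the example concrete.

```python
import numpy as np

# Hypothetical dynamic features extracted from one event's P, Q, V, f traces.
event_features = {"overshoot_pct": 12.0, "settling_time_s": 3.4,
                  "damping_ratio": 0.18, "peak_value_pu": 1.07}

# Hypothetical per-feature thresholds; one fingerprint bit per threshold.
thresholds = {"overshoot_pct": [5.0, 10.0, 20.0],
              "settling_time_s": [1.0, 2.0, 5.0],
              "damping_ratio": [0.1, 0.3, 0.5],
              "peak_value_pu": [1.02, 1.05, 1.10]}

def fingerprint(features, thresholds):
    """Code an event as a bit-string: a bit is set whenever the feature
    value exceeds the corresponding threshold level."""
    bits = []
    for name, levels in thresholds.items():
        for level in levels:
            bits.append(1 if features[name] > level else 0)
    return np.array(bits, dtype=np.uint8)

print(fingerprint(event_features, thresholds))   # -> [1 1 0 1 1 0 1 0 0 1 1 0]
```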
  • the plurality of unique events includes at least a first event, a second event, and a third event.
  • the model includes a first set of parameters.
  • the system 300 executes the model using the first set of parameters and the first event to generate a first set of results.
  • the system 300 analyzes the first set of results to generate a second set of parameters.
  • the system 300 executes the model using the second set of parameters and the second event to generate a second set of results.
  • the system 300 analyzes the second set of results to generate a third set of parameters.
  • the system 300 executes the model using the third set of parameters and the third event to generate a third set of results.
  • the system 300 analyzes the third set of results to generate a fourth set of parameters.
  • the system 300 compares the first set of results, the second set of results, and the third set of results to determine the set of calibrated parameters 312 .
  • each set of the results includes residual error between the simulated response and the measured response for each of the one or more sensitive parameters.
  • the system 300 compares the plurality of residual errors to select the set of calibrated parameters with minimal overall residual error.
  • FIG. 15 illustrates a process 1500 for sequential calibration using the system architecture 300 (shown in FIG. 3 ).
  • the system 300 receives a plurality of events, such as events 314 , 316 , and 318 (shown in FIG. 3 ) and events 1502 , 1510 , and 1514 .
  • process 1500 is performed by one or more of the system architecture 300, the processor 1310, and the power system disturbance based model calibration engine 1314 (both shown in FIG. 13).
  • process 1500 receives initial parameters 1504 and chooses a first event 1502.
  • the first event 1502 is one of the received plurality of events. In other embodiments, the first event 1502 is a historical event or an event designated for testing purposes.
  • the first event 1502 and the initial parameters 1504 are used as inputs for a model validation and calibration (MVC) process 1506 , also known as MVC engine 1506 .
  • MVC process 1506 is similar to MVC 500 .
  • the first event 1502 includes at least the actual voltage, frequency, active power, and reactive power for the event.
  • the MVC process 1506 generates a first updated set of parameters 1508 based on how the initial parameters 1504 matched up with the first event 1502 .
  • the MVC process 1506 uses the initial parameters 1504 and the voltage and frequency to predict the active and reactive power for the first event 1502 . Then the MVC process 1506 compares the predicted active and reactive power to the actual active and reactive power for the first event 1502 . The MVC process 1506 adjusts the initial parameters 1504 based on that comparison to generate an updated parameter set 1508 .
  • the first updated set of parameters 1508 are then used with a second event 1510 as inputs into the MVC process 1506 to generate a second updated set of parameters 1512 .
  • the second updated set of parameters 1512 is then used with a third event 1514 as another set of inputs to the MVC process 1506 to generate a third updated set of parameters 1516.
  • the process 1500 continues to serially analyze events to generate updated parameter sets. For example, if the process 1500 receives 25 events, then each event is analyzed in order to determine updated parameters based on that event and the MVC process 1506, with the goal that the resulting parameters allow the MVC process 1506 to accurately predict the outcome of the plurality of events.
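  • A compact sketch of this serial, event-by-event flow is shown below; `mvc_update` is a hypothetical stand-in for the MVC engine, reduced to a single least-squares correction against a toy linear response so that the loop runs end to end.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_pq(params, vf):
    # Toy stand-in for the playback simulation: predicts [P, Q] samples
    # from voltage/frequency inputs and the current parameters.
    return vf @ params

def mvc_update(params, event):
    """Hypothetical MVC step: compare predicted and measured P/Q for one
    event, then nudge the parameters toward the measurement."""
    predicted = simulate_pq(params, event["vf"])
    correction, *_ = np.linalg.lstsq(event["vf"], event["pq"] - predicted, rcond=None)
    return params + correction

# Toy events: each has voltage/frequency inputs and measured P/Q outputs.
true_params = np.array([1.5, -0.4])
events = []
for _ in range(3):
    vf = rng.normal(size=(20, 2))
    events.append({"vf": vf, "pq": vf @ true_params + 0.01 * rng.normal(size=20)})

params = np.array([1.0, 0.0])       # initial parameter set
history = [params]                  # sequence of parameter sets, one per event
for event in events:                # serial, event-by-event calibration
    params = mvc_update(params, event)
    history.append(params)

print(np.round(params, 3))          # approaches the true parameters
```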
  • process 1500 allows for the parameters that affect each event to be analyzed, rather than having events that cancel out the effect of different parameters. For example, considering three different events, event-1, event-2, and event-3, the sequential approach shown in process 1500 will generate three down-selected parameter subsets, say P-1, P-2, and P-3, corresponding to the three events. Each parameter subset is determined to be the best subset that can describe the corresponding event based on the parameter identifiability algorithm 525. Then the parameter subsets P-1, P-2, and P-3 may be further used for the parameter estimation process 530 based on the corresponding event.
  • the parameter identifiability in a group calibration approach may not reach such an optimality.
  • the parameters for each of these events are analyzed overall for the entire set of events. In this way, the parameters for each event contribute to the final parameters and allow the system to find the ideal parameters for the entire set while still taking into account each individual event.
  • FIG. 16 is a data flow diagram illustrating a sub-section 1600 of the architecture system 300 (shown in FIG. 3 ) executing the sequential calibration process 1500 (shown in FIG. 15 ).
  • the system architecture 1600 receives network models 1602 , sub-system definitions 1604 , dynamic models 1606 , and event data 1608 at an input handling component 1610 .
  • input handling component 1610 includes the event screening component 302 (shown in FIG. 3 ).
  • Steady state network models 1602 can be either EMS or system planning models. In some embodiments, they may be in e-terra NETMOM or CIM13 format. Dynamic models 1606 can be in either PSS/E or PSLF or TSAT format. The system 1600 can also accept more than one dynamic data file when data is distributed among multiple files. In the exemplary embodiment, the network models 1602 and the dynamic models 1606 use the same naming convention for the network elements.
  • the sub-system definitions 1604 are based on the network model 1602 and one or more maps of the power plant.
  • a sub-system identification module combines the network model 1602 and the one or more maps to generate the sub-system definition 1604 .
  • the sub-system definition 1604 is provided via an XML file that defines the POI(s) and generators that make up a power plant. Power plants are defined by the generators in the plant with their corresponding POI(s). A few examples of power plant sub-system definitions are listed below in TABLE 1.
  • the system 1600 provides a user interface 1638 to facilitate defining the power plant starting from a potential POI.
  • Potential POIs are identified as terminals/buses in the system having all required measurements (V, f, P, Q) to perform model validation and calibration.
  • a measurement mapping module identifies terminals with V, f, P, Q measurements and lets the user search for radially connected generators starting from potential POIs.
  • Sub-system definitions 1604 may also be saved for future use. In some embodiments, a sub-system definition 1604 is defined for each event 1608 .
  • Events 1608 occur when the voltage and/or the frequency of the power system changes.
  • an event 1608 may be a generator turning on.
  • if the event 1608 has the same or similar attributes as a previous event 1608, such as that same generator turning on, the event 1608 is skipped to reduce redundant processing.
  • the event data or Phasor data 1608 will be imported from a variety of sources, such as, but not limited to, e-terraphasorpoint, openPDC, CSV files, COMTRADE files and PI historian.
  • the POIs will have at least voltage, frequency, real power and reactive power measurements. In some embodiments, voltage angle is substituted for frequency.
  • the network models 1602 , sub-system definitions 1604 , dynamic models 1606 , and event data 1608 are analyzed by the system 1600 as described herein.
  • the model utilizes multiple disturbance events to validate and calibrate power system models for compliance with NERC mandated grid reliability requirements.
  • the user accesses the user interface 1638 to set the total number of events 1608 that will be used in process 1500 , set the stored file locations, and set the sequence that the events 1608 will be analyzed in.
  • system 1600 includes a set of initial parameters 1612 .
  • the set of initial parameters 1612 are based on the dynamic model 1606 .
  • the initial parameters 1612 and a first event 1614 are set as inputs and a model validation and calibration (MVC) 1616 is performed using those parameters 1612 and that first event 1614 .
  • the MVC 1616 is performed by the simulation engine 308 (shown in FIG. 3 ).
  • the MVC 1616 is associated with the MVC process 1506 (shown in FIG. 15 ) and/or the MVC process 500 (shown in FIG. 5 ).
  • the MVC 1616 generates a response 1618 , which includes statistics about how the initial parameters 1612 performed in matching up to the first event 1614 based on the MVC process 1506 .
  • the MVC 1616 also generates a first set of updated parameters 1620 based on the event's performance in the MVC process 1506 .
  • the MVC 1616 uses the initial parameters 1612 and the voltage and frequency of the first event 1614 to predict the active and reactive power for the first event 1614 . Then the MVC 1616 compares the predicted active and reactive power to the actual active and reactive power for the first event 1614 . The MVC 1616 adjusts the parameters 1612 into the first set of updated parameters 1620 based on that comparison and also uses the comparison to generate the first response 1618 .
  • the system 1600 uses the first set of updated parameters 1620 with the second event 1622 into the MVC process 1506 to generate a second updated set of parameters 1628 and a second response 1626 .
  • the second updated set of parameters 1628 is then used with a third event 1630 as another set of inputs to the MVC process 1506 to generate a third updated set of parameters 1636 and a third response 1634.
  • the system 1600 continues to serially analyze events 1608 to generate updated parameter sets. For example, if the system 1600 receives 25 events 1608 , then each event 1608 will be analyzed in order to determine updated parameters based on that event 1608 and the MVC process 1506 , with the goal being that the parameters allow the MVC process 1506 to generate adjusted parameters to accurately predict the outcome of the plurality of events.
  • the user may use the user interface 1638 to review the responses and the updated parameters. Furthermore, the user interface 1638 may allow the user to determine the order that the events 1608 are analyzed. In other embodiments, the system 1600 may serially analyze the events 1608 in a plurality of orders to determine the ideal set of updated parameters.
  • FIG. 17 is a data flow diagram illustrating the architecture system 300 (shown in FIG. 3 ) executing a parameter selection process 1700 in accordance with at least one embodiment.
  • parameter selection process 1700 is performed based on the results of process 1500 (shown in FIG. 15 ) and using architecture 1600 similar to that shown in FIG. 16 and/or architecture 300 similar to that shown in FIG. 3 .
  • process 1700 uses a model validation component 1704 .
  • model validation component 1704 is similar to model validation 535 (shown in FIG. 5 ) and includes Steps 505 - 515 (shown in FIG. 5 ). In this embodiment, the model validation component 1704 performs Steps 505 - 515 and generates a response based on the results.
  • the plurality of events 1608 are combined into an event set 1702 , which allows the model validation component 1704 to playback all of the events 1608 in the event set 1702 .
  • the model validation component 1704 analyzes a set of parameters, such as first set of parameters 1612 , based on all of the events 1608 in the event set 1702 to generate a first response 1706 .
  • the model validation component 1704 generates a mean square error for each event 1608 and then combines the individual mean square errors into a single mean square error for the event set 1702.
  • the mean square error is provided in the first response 1706. While mean square error is described herein, one having skill in the art would understand that other methods of evaluating and ranking the parameter sets may be used.
  • the process 1700 further includes generating a second response 1708 where the first set of updated parameters 1620 are analyzed based on the event set 1702 .
  • a third response 1710 is generated based on the second set of updated parameters 1628 and a fourth response 1712 is generated based on a third set of updated parameters 1636 .
  • for each set of updated parameters, process 1700 analyzes that set of updated parameters against the event set 1702.
  • the plurality of responses are then provided to a best result selection component 1714 .
  • the best result selection component 1714 compares the results for each set of parameters to determine which is the optimal set of parameters to use for the model. In some embodiments, the best result selection component 1714 compares the mean square error provided in each result to the other results to determine which set of updated parameters to use. In other embodiments, the best result selection component 1714 compares the results to a threshold, and when a result meets that threshold, the best result selection component 1714 chooses the corresponding set of parameters. In some further embodiments, process 1700 is executed in parallel with process 1500. In these embodiments, when an updated set of parameters is generated in process 1500, process 1700 is used to analyze those parameters. In some further embodiments, when the results meet the desired threshold, process 1700 instructs process 1500 to end.
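  • One plausible realization of this selection logic is sketched below, assuming a per-event mean square error has already been computed for each candidate parameter set; it averages the per-event errors into one score per set and returns either the first set that meets an optional threshold or the set with the minimum score.

```python
import numpy as np

def select_best(responses, threshold=None):
    """responses: mapping of parameter-set name -> per-event mean square errors.
    Returns the name of the chosen parameter set."""
    best_name, best_score = None, np.inf
    for name, per_event_mse in responses.items():
        score = float(np.mean(per_event_mse))    # combine events into one score
        if threshold is not None and score <= threshold:
            return name                           # early exit once "good enough"
        if score < best_score:
            best_name, best_score = name, score
    return best_name

# Hypothetical responses from validating four parameter sets over an event set.
responses = {
    "initial_set": [0.042, 0.051, 0.047],
    "first_updated": [0.031, 0.029, 0.035],
    "second_updated": [0.019, 0.022, 0.021],
    "third_updated": [0.021, 0.020, 0.025],
}
print(select_best(responses))                    # -> "second_updated" here
```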
  • the parameter sets are analyzed serially. In other embodiments, the parameter sets are analyzed in parallel.
  • the parameter set selected by the best result selection component 1714 is transmitted to the user, such as in a dyd file, as the calibrated parameters 312 (shown in FIG. 3 ).
  • the process 1700 conducts model validation 1704 across each event 1608 .
  • the simulated response based on the calibrated parameters is compared with the measured response. Then the best calibrated parameter set is selected as the one that leads to the minimal overall residual error between the simulated response and the measurement.
  • the first set of parameters is used to generate the three simulated responses against each of events 1, 2, and 3.
  • the residual, or deviation between the simulated response and the measured response, for each event is r1, r2, and r3, respectively.
  • the first step is then repeated using each remaining set of parameters.
  • the best result selection component 1714 selects the set of parameters with the minimal residual among all the overall residuals r10, r20, r30, and r40 computed for the parameter sets.
  • At least one of the technical solutions to the technical problems provided by this system may include: (i) improved speed in modeling parameters; (ii) more robust models in response to measurement noise; (iii) compliance with NERC mandated grid reliability requirements; (iv) reduced chance that an important parameter is not updated; (v) improved accuracy in parameter identifiability; (vi) improved accuracy in parameter estimation; and (vii) improved optimization of parameters based on event training.
  • the methods and systems described herein may be implemented using computer programming or engineering techniques including computer software, firmware, hardware, or any combination or subset thereof, wherein the technical effects may be achieved by performing at least one of the following steps: (a) store a model of the power system, wherein the model includes a plurality of events; (b) receive, from the at least one sensor, event data associated with an event of the power system; (c) analyze the event data to determine if the event is different from the plurality of events; (d) determine at least one parameter associated with the event; and (e) optimize the model to account for the event.
  • the computer-implemented methods discussed herein may include additional, less, or alternate actions, including those discussed elsewhere herein.
  • the methods may be implemented via one or more local or remote processors, transceivers, servers, and/or sensors, and/or via computer-executable instructions stored on non-transitory computer-readable media or medium.
  • computer systems discussed herein may include additional, less, or alternate functionality, including that discussed elsewhere herein.
  • the computer systems discussed herein may include or be implemented via computer-executable instructions stored on non-transitory computer-readable media or medium.
  • a processor or a processing element may employ artificial intelligence and/or be trained using supervised or unsupervised machine learning, and the machine learning program may employ a neural network, which may be a convolutional neural network, a deep learning neural network, or a combined learning module or program that learns in two or more fields or areas of interest.
  • Machine learning may involve identifying and recognizing patterns in existing data in order to facilitate making predictions for subsequent data. Models may be created based upon example inputs in order to make valid and reliable predictions for novel inputs.
  • the machine learning programs may be trained by inputting sample data sets or certain data into the programs, such as image data, text data, report data, and/or numerical analysis.
  • the machine learning programs may utilize deep learning algorithms that may be primarily focused on pattern recognition, and may be trained after processing multiple examples.
  • the machine learning programs may include Bayesian program learning (BPL), voice recognition and synthesis, image or object recognition, optical character recognition, and/or natural language processing—either individually or in combination.
  • the machine learning programs may also include natural language processing, semantic analysis, automatic reasoning, and/or machine learning.
  • a processing element may be provided with example inputs and their associated outputs, and may seek to discover a general rule that maps inputs to outputs, so that when subsequent novel inputs are provided the processing element may, based upon the discovered rule, accurately predict the correct output.
  • the processing element may be required to find its own structure in unlabeled example inputs.
  • machine learning techniques may be used to extract data about the computer device, the user of the computer device, the computer network hosting the computer device, services executing on the computer device, and/or other data.
  • the processing element may learn how to identify characteristics and patterns that may then be applied to training models, analyzing sensor data, and detecting abnormalities.
  • the above-described embodiments of the disclosure may be implemented using computer programming or engineering techniques including computer software, firmware, hardware or any combination or subset thereof. Any such resulting program, having computer-readable code means, may be embodied or provided within one or more computer-readable media, thereby making a computer program product, i.e., an article of manufacture, according to the discussed embodiments of the disclosure.
  • the computer-readable media may be, for example, but is not limited to, a fixed (hard) drive, diskette, optical disk, magnetic tape, semiconductor memory such as read-only memory (ROM), and/or any transmitting/receiving medium, such as the Internet or other communication network or link.
  • the article of manufacture containing the computer code may be made and/or used by executing the code directly from one medium, by copying the code from one medium to another medium, or by transmitting the code over a network.
  • a processor may include any programmable system including systems using micro-controllers, reduced instruction set circuits (RISC), application specific integrated circuits (ASICs), logic circuits, and any other circuit or processor capable of executing the functions described herein.
  • the above examples are example only, and are thus not intended to limit in any way the definition and/or meaning of the term “processor.”
  • the terms “software” and “firmware” are interchangeable, and include any computer program stored in memory for execution by a processor, including RAM memory, ROM memory, EPROM memory, EEPROM memory, and non-volatile RAM (NVRAM) memory.
  • a computer program is provided, and the program is embodied on a computer-readable medium.
  • the system is executed on a single computer system, without requiring a connection to a server computer.
  • the system is being run in a Windows® environment (Windows is a registered trademark of Microsoft Corporation, Redmond, Wash.).
  • the system is run on a mainframe environment and a UNIX® server environment (UNIX is a registered trademark of X/Open Company Limited located in Reading, Berkshire, United Kingdom).
  • the system is run on an iOS® environment (iOS is a registered trademark of Cisco Systems, Inc. located in San Jose, Calif.).
  • the system is run on a Mac OS® environment (Mac OS is a registered trademark of Apple Inc. located in Cupertino, Calif.). In still yet a further embodiment, the system is run on Android® OS (Android is a registered trademark of Google, Inc. of Mountain View, Calif.). In another embodiment, the system is run on Linux® OS (Linux is a registered trademark of Linus Torvalds of Boston, Mass.). The application is flexible and designed to run in various different environments without compromising any major functionality.
  • the system includes multiple components distributed among a plurality of computer devices.
  • One or more components may be in the form of computer-executable instructions embodied in a computer-readable medium.
  • the systems and processes are not limited to the specific embodiments described herein.
  • components of each system and each process can be practiced independent and separate from other components and processes described herein.
  • Each component and process can also be used in combination with other assembly packages and processes.
  • the present embodiments may enhance the functionality and functioning of computers and/or computer systems.

Abstract

A system for sequential power system model calibration is provided. The system includes a computing device in communication with at least one sensor monitoring a power system. The computing device includes at least one processor in communication with at least one memory. The at least one processor is programmed to store a model of a device. The model includes a plurality of parameters. The at least one processor is also programmed to receive a plurality of events associated with the device, filter the plurality of events to generate a plurality of unique events, sequentially analyze the plurality of unique events to determine a set of calibrated parameters for the model, and update the model to include the set of calibrated parameters.

Description

    CROSS REFERENCE TO RELATED APPLICATIONS
  • This application claims the benefit of priority to U.S. Provisional Patent Application No. 62/833,492, filed Apr. 12, 2019, entitled “SYSTEMS AND METHODS FOR SEQUENTIAL POWER SYSTEM MODEL PARAMETER ESTIMATION,” the entire contents and disclosure of which are incorporated herein by reference in their entirety.
  • BACKGROUND
  • The field of the invention relates generally to sequential power system model parameter estimation, and more particularly, to a system for sequentially calibrating power system models based on multiple events.
  • During the 1996 Western System Coordinating Council (WSCC) blackout, the planning studies conducted using dynamic models had predicted stable system operation, whereas the real system became unstable in a few minutes with severe swings. To ensure the models represent the real system accurately, the North American Electric Reliability Corporation (NERC) requires generators above 20 MVA to be tested every 5 years or 10 years (depending on their interconnection) to check the accuracy of dynamic models and update the power plant dynamic models as necessary.
  • Some of the methods of performing calibration on the model include performing staged tests and direct measurement of disturbances. In a staged test, a generator is first taken offline from normal operation. While the generator is offline, the testing equipment is connected to the generator and its controllers to perform a series of predesigned tests to derive the desired model parameters. This method may cost $15,000-$35,000 per generator per test in the United States, including both the cost of performing the test and the cost of taking the generator off-line. Phasor Measurement Units (PMUs) and Digital Fault Recorders (DFRs) have seen dramatically increasing installation in recent years, which allows for non-invasive model validation using the sub-second-resolution dynamic data. Varying types of disturbances across locations in the power system, along with the large installed base of PMUs, make it possible to validate the dynamic models of the generators frequently at different operating conditions.
  • As more and more disturbances in power systems are being recorded by PMUs every day, the North American Electric Reliability Corporation (NERC) has pointed out that the analysis of multiple system events is beneficial for model calibration. A generator or load model built from one or two field tests may not be a good model, since it may overfit some specific event and lack the ability to fit new, freshly measured load curves. Thus far, the primary questions in the community have been: what parameters to calibrate, and how to calibrate them. Accordingly, there exists a need for additional accuracy in model calibration.
  • BRIEF DESCRIPTION
  • In one aspect, a system for sequential power system model calibration is provided. The system includes a computing device including at least one processor in communication with at least one memory device. The at least one processor is programmed to store a model of a device. The model includes a plurality of parameters. The at least one processor is also programmed to receive a plurality of events associated with the device. The at least one processor is further programmed to filter the plurality of events to generate a plurality of unique events. In addition, the at least one processor is programmed to sequentially analyze the plurality of unique events to determine a set of calibrated parameters for the model. Moreover, the at least one processor is programmed to update the model to include the set of calibrated parameters.
  • In another aspect, a computer-implemented method for sequential power system model calibration is provided. The method is implemented by a computing device including at least one processor in communication with at least one memory device. The method includes storing a model of a device. The model includes a plurality of parameters. The method also includes receiving a plurality of events associated with the device. The method further includes filtering the plurality of events to generate a plurality of unique events. In addition, the method includes sequentially analyzing the plurality of unique events to determine a set of calibrated parameters for the model. Moreover, the method includes updating the model to include the set of calibrated parameters.
  • In a further aspect, a non-transitory computer-readable storage media having computer-executable instructions embodied thereon is provided. When executed by a computing device having at least one processor coupled to at least one memory device, the computer-executable instructions cause the processor to store a model of a device. The model includes a plurality of parameters. The computer-executable instructions also cause the processor to receive a plurality of events associated with the device. The computer-executable instructions further cause the processor to filter the plurality of events to generate a plurality of unique events. In addition, the computer-executable instructions cause the processor to sequentially analyze the plurality of unique events to determine a set of calibrated parameters for the model. Moreover, the computer-executable instructions cause the processor to update the model to include the set of calibrated parameters.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The Figures described below depict various aspects of the systems and methods disclosed therein. It should be understood that each Figure depicts an embodiment of a particular aspect of the disclosed systems and methods, and that each of the Figures is intended to accord with a possible embodiment thereof. Further, wherever possible, the following description refers to the reference numerals included in the following Figures, in which features depicted in multiple Figures are designated with consistent reference numerals.
  • There are shown in the drawings arrangements which are presently discussed, it being understood, however, that the present embodiments are not limited to the precise arrangements and instrumentalities shown, wherein:
  • FIG. 1 illustrates a block diagram of a power distribution grid.
  • FIG. 2 illustrates a high-level block diagram of a system for performing sequential calibration in accordance with some embodiments.
  • FIG. 3 illustrates a block diagram of an exemplary system architecture for sequential calibration, in accordance with one embodiment of the disclosure.
  • FIGS. 4A and 4B illustrate examples of screening sequential events using fingerprinting.
  • FIG. 4C illustrates a graph of null spaces from multiple events.
  • FIG. 5 illustrates a process for power system model parameter conditioning in accordance with some embodiments.
  • FIG. 6 is a diagram illustrating a model calibration algorithm in accordance with some embodiments.
  • FIG. 7 is a table illustrating a comparison of events calibrated for one event and all other events.
  • FIG. 8 is a table illustrating the use of the sequential estimation approach for a gas plant.
  • FIG. 9 is a table illustrating the use of the sequential estimation approach for a hydro plant.
  • FIGS. 10A and 10B illustrate a process for identifying and estimating parameters in accordance with at least one embodiment.
  • FIG. 11 is a diagram illustrating candidate parameter estimation algorithms in accordance with some embodiments.
  • FIG. 12 illustrates a two-stage approach of the process for model calibration.
  • FIG. 13 is a diagram illustrating an exemplary apparatus or platform according to some embodiments.
  • FIG. 14 is a diagram illustrating a method of performing model calibration using multiple disturbance events in accordance with at least one embodiment.
  • FIG. 15 illustrates a process for sequential calibration using the system architecture shown in FIG. 3.
  • FIG. 16 is a data flow diagram illustrating the architecture system shown in FIG. 3 executing the sequential calibration process shown in FIG. 15.
  • FIG. 17 is a data flow diagram illustrating the architecture system shown in FIG. 3 executing a parameter selection process in accordance with at least one embodiment.
  • DETAILED DESCRIPTION
  • In the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of embodiments. However, it will be understood by those of ordinary skill in the art that the embodiments may be practiced without these specific details. In other instances, well-known methods, procedures, components and circuits have not been described in detail so as not to obscure the embodiments.
  • One or more specific embodiments are described below. In an effort to provide a concise description of these embodiments, all features of an actual implementation may not be described in the specification. It should be appreciated that in the development of any such actual implementation, as in any engineering or design project, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which may vary from one implementation to another. Moreover, it should be appreciated that such a development effort might be complex and time consuming, but would nevertheless be a routine undertaking of design, fabrication, and manufacture for those of ordinary skill having the benefit of this disclosure.
  • A traditional simulation engine relies on differential algebraic equations (DAEs) therein to perform simulations. For example, the simulation engine may include dozens or hundreds of such equations for a single component on the power grid. Because of the number of different equations in the simulation engine software used to represent the power system (generator, transformer, load), performance of a simulation is slow. Furthermore, because the simulation engine has a non-linear response, it is not easy to automatically extract the analytical gradient information that is needed for optimization. One simulation is the equivalent of a Jacobian matrix calculation, which can include 200 iterations or more. Each iteration can take a minute or more, meaning that a single simulation can require at least 200 minutes.
  • Typically a dynamic simulation engine is used to facilitate both identifiability of parameters (in total) and determination of parameters for calibration. Given field data with time-stamped voltage (V) and frequency (f), the simulation engine will provide the simulated active power (P̂) and reactive power (Q̂) with the same timestamp. Parameter identification involves multiple calls of the simulation engine with parameter perturbation to determine the best choice of a subset of the parameters for tuning (calibration). Calibration involves multiple calls of the simulation engine to search for the best values for the given subset of parameters determined in the identifiability step.
  • The example embodiments provide a predictive model which can be used to replace the dynamic simulation engine when performing the parameter identification and the parameter calibration. This is described in U.S. patent application Ser. No. 15/794,769, filed 26 Oct. 2017, the contents of which are incorporated in their entirety. The model can be trained based on historical behavior of a dynamic simulation engine, thereby learning patterns between inputs and outputs of the dynamic simulation engine. The model can emulate the functionality performed by the dynamic simulation engine without having to perform numerous rounds of simulation. Instead, the model can predict (e.g., via a neural network, or the like) a subset of parameters for model calibration and also predict/estimate optimal parameter values for that subset of parameters in association with a power system model that is being calibrated. According to the examples herein, the model may be used to capture both the input-output function and the first derivative of a dynamic simulation engine used for model calibration. The model may be updated based on its confidence level and prediction deviation against the original simulation engine. Here, the model may be a surrogate for a dynamic simulation engine and may be used to perform model calibration without using DAE equations. The system described herein may be a model parameter tuning engine, which is configured to receive the power system data and a model calibration command, and to search for the optimal model parameters using the surrogate model until the closeness between the simulated response and the real response from the power system data meets a predefined threshold. In the embodiments described herein, the model operates on disturbance event data that includes one or more of device terminal real power, reactive power, voltage magnitude, and phase angle data. The model calibration may be triggered by a user or by an automatic model validation step. In some aspects, the model may be trained offline when there is no grid event calibration task. The model may represent a set of different models used for different kinds of events. In some embodiments, the model's input may include at least one of voltage, frequency, and other model tunable parameters. The model may be a neural network model, fuzzy logic, a polynomial function, and the like. Other model tunable parameters may include a parameter affecting dynamic behavior of the machine, exciter, stabilizer, and governor. Also, the surrogate model's output may include active power, reactive power, or both. In some cases, the optimizer may be a gradient-based method, including Newton-like methods. Alternatively, the optimizer may be a gradient-free method, including pattern search, genetic algorithm, simulated annealing, particle swarm optimizer, differential evolution, and the like.
  • In the exemplary embodiment disclosed herein with respect to FIG. 3 below, the model utilizes multiple disturbance events to validate and calibrate power system models for compliance with NERC mandated grid reliability requirements. The sequential model calibration system described herein comprises three steps. The first step is sequential event selection. This uses a similarity-based screening approach, where the event's dynamic features are coded as a bit-string. The system considers not only active power (P) and reactive power (Q) but also voltage (U) and frequency (F). In some embodiments, the Tanimoto coefficient is used as the similarity metric. The second step is sequential parameter identifiability, which includes selecting the most sensitive parameter subset based on an increasing number of events. The third step is Bayesian optimization. This includes determining the new parameter values by considering the deviation from previous parameter estimates. The weight for the penalty is derived from a Bayesian argument.
  • FIG. 1 illustrates a power distribution grid 100. The grid 100 includes a number of components, such as power generators 110. In some cases, planning studies conducted using dynamic models predict stable grid 100 operation, but the actual grid 100 may become unstable in a few minutes with severe swings (resulting in a massive blackout). To ensure that the models represent the real system accurately, the North American Electric Reliability Corporation (“NERC”) requires generators 110 above 10 MVA to be tested every five years to check the accuracy of dynamic models and let the power plant dynamic models be updated as necessary. The systems described herein consider not only active power (P) and reactive power (Q) but also voltage (U) and frequency (F).
  • In a typical staged test, a generator 110 is first taken offline from normal operation. While the generator 110 is offline, testing equipment is connected to the generator 110 and its controllers to perform a series of pre-designed tests to derive the desired model parameters. PMUs 120 and Digital Fault Recorders (“DFRs”) 130 have seen dramatically increasing installation in recent years, which may allow for non-invasive model validation using the sub-second-resolution dynamic data. Varying types of disturbances across locations in the grid 100, along with the large installed base of PMUs 120, may, according to some embodiments, make it possible to validate the dynamic models of the generators 110 frequently at different operating conditions. There is a need for a production-grade software tool generic enough to be applicable to a wide variety of models (traditional generating plant, wind, solar, dynamic load, etc.) with minimal changes to existing simulation engines. Note that model calibration is a process that seeks multiple (dozens or hundreds of) model parameters, which could suffer from local minima and multiple solutions. There is a need for an algorithm that enhances the quality of a solution within a reasonable amount of time and computational burden.
  • Online performance monitoring of power plants using synchrophasor data or other high-resolution disturbance monitoring data acts as a recurring test to ensure that the modeled response to system events matches the actual response of the power plant or generating unit. From the Generator Owner (GO)'s perspective, online verification using high-resolution measurement data can provide evidence of compliance by demonstrating the validity of the model by online measurement. Therefore, it is a cost-effective approach for GOs, as they may not have to take the unit offline for testing of model parameters. Online performance monitoring requires that disturbance monitoring equipment such as a PMU be located at the terminals of an individual generator or the Point of Interconnection (POI) of a power plant.
  • The disturbance recorded by a PMU normally consists of four variables: voltage, frequency, active power, and reactive power. To use the PMU data for model validation, play-in or playback simulation has been developed and is now available in all major grid simulators. The simulated output, including active power and reactive power, will be generated and can be further compared with the measured active power and reactive power.
  • To achieve such results, FIG. 2 is a high-level block diagram of a system 200 in accordance with some embodiments. The system 200 includes one or more measurement units 210 (e.g., PMUs, DFRs, or other devices to measure frequency, voltage, current, or power phasors) that store information into a measurement data store 220. As used herein, the term “PMU” might refer to, for example, a device used to estimate the magnitude and phase angle of an electrical phasor quantity like voltage or current in an electricity grid using a common time source for synchronization. The term “DFR” might refer to, for example, an Intelligent Electronic Device (“IED”) that can be installed in a remote location, and acts as a termination point for field contacts. According to some embodiments, the measurement data might be associated with disturbance event data and/or data from deliberately performed unit tests. According to some embodiments, a model parameter tuning engine 250 may access this data and use it to tune parameters for a dynamic system model 260. The process might be performed automatically or be initiated via a calibration command from a remote operator interface device 290. As used herein, the term “automatically” may refer to, for example, actions that can be performed with little or no human intervention.
  • Note that power systems may be designed and operated using mathematical models (power system models) that characterize the expected behavior of power plants, grid elements, and the grid as a whole. These models support decisions about what types of equipment to invest in, where to put it, and how to use it in second-to-second, minute-to-minute, hourly, daily, and long-term operations. When a generator, load, or other element of the system does not act in the way that its model predicts, the mismatch between reality and model-based expectations can degrade reliability and efficiency. Inaccurate models have contributed to a number of major North American power outages.
  • The behavior of power plants and electric grids may change over time and should be checked and updated to assure that they remain accurate. Engineers use the processes of validation and calibration to make sure that a model can accurately predict the behavior of the modeled object. Validation assures that the model accurately represents the operation of the real system—including model structure, correct assumptions, and that the output matches actual events. Once the model is validated, a calibration process may be used to make minor adjustments to the model and its parameters so that the model continues to provide accurate outputs. High-speed, time-synchronized data, collected using PMUs may facilitate model validation of the dynamic response to grid events. Grid operators may use, for example, PMU data recorded during normal plant operations and grid events to validate grid and power plant models quickly and at lower cost.
  • Transmission operators, regional reliability coordinators, or independent system operators, such as MISO, ISO-New England, and PG&E, can use this calibrated generator or power system model for power system stability studies based on N-k contingencies, every 5 to 10 minutes. If there is a stability issue (transient stability) for some specific contingency, the power flow will be redirected to relieve the stress-limiting factors. For example, the output of some power generators will be adjusted to redirect the power flow. Alternatively, adding more capacity (more power lines) to the existing system can be used to increase the transmission capacity.
  • With a model that accurately reflects oscillations and their causes, the grid operator can also diagnose the causes of operating events, such as wind-driven oscillations, and identify appropriate corrective measures before those oscillations spread to harm other assets or cause a loss of load.
  • As used herein, devices, including those associated with the system 200 and any other device described herein, may exchange information via any communication network which may be one or more of a Local Area Network (“LAN”), a Metropolitan Area Network (“MAN”), a Wide Area Network (“WAN”), a proprietary network, a Public Switched Telephone Network (“PSTN”), a Wireless Application Protocol (“WAP”) network, a Bluetooth network, a wireless LAN network, and/or an Internet Protocol (“IP”) network such as the Internet, an intranet, or an extranet. Note that any devices described herein may communicate via one or more such communication networks.
  • The model parameter tuning engine 250 may store information into and/or retrieve information from various data stores, which may be locally stored or reside remote from the model parameter tuning engine 250. Although a single model parameter tuning engine 250 is shown in FIG. 2, any number of such devices may be included. Moreover, various devices described herein might be combined according to embodiments of the present invention. For example, in some embodiments, the measurement data store 220 and the model parameter tuning engine 250 might comprise a single apparatus. The system 200 functions may be performed by a constellation of networked apparatuses, such as in a distributed processing or cloud-based architecture.
  • A user may access the system 200 via the device 290 (e.g., a Personal Computer (“PC”), tablet, or smartphone) to view information about and/or manage operational information in accordance with any of the embodiments described herein. In some cases, an interactive graphical user interface display may let an operator or administrator define and/or adjust certain parameters (e.g., when a new electrical power grid component is calibrated) and/or provide or receive automatically generated recommendations or results from the system 200.
  • FIG. 3 illustrates a block diagram of an exemplary system architecture 300 for sequential calibration, in accordance with one embodiment of the disclosure. In the exemplary embodiment, the system 300 receives a plurality of events 314, 316, and 318 sequentially. The events 314, 316, and 318 are received by the event screening component 302, which screens which events 314, 316, and 318 are to be analyzed. Events 314, 316, and 318 occur when the voltage and/or the frequency of the power system changes. For each event 314, 316, and 318, the event screening component 302 determines whether that event is novel enough. For example, an event 316 may be a generator turning on. If the event 316 has the same or similar attributes as a previous event 314, such as that same generator turning on, then the event screening component 302 skips this event 316. In the exemplary embodiment, the event screening component 302 compares the event 316 to those events stored in a database 310. If the event 316 is novel enough, then the event 316 is stored in the database 310. Then the event 316 is sent to the parameter identifiability component 304. This component 304 analyzes the event 316 in combination with past events and the parameters identified as significant for those events to determine which parameters are significant for this event 316. Then the tunable parameters are transmitted to the Bayesian Optimization component 306, which further analyzes the significant parameters to calibrate the parameters in the model being executed by the simulation engine 308 to get a final set of calibrated parameters 312. The steps in this process are further described below.
  • The first step is the sequential event screening. In the exemplary embodiment, the goal is to screen only the representative or most characteristic events among all events so as to get a faster calculation and avoid overfitting to some specific events. The underlying assumption is that similar input/output (IO) curve features lead to similar dynamics, which in turn leads to similar dynamic parameters.
  • One approach is to compute a bit-string encoding representative information about the event and to use the similarity between the bit-strings as a measure of the similarity between the events. As used herein, a bit-string for an event is analogous to a fingerprint in molecular analysis in medicine. The fingerprint comprises a long string with each bit set to either zero or one. Each bit in the fingerprint corresponds to a feature of the event, and that bit is set or not according to whether the given event has the feature.
  • The features of an event may comprise a peak value, a bottom value, an overshoot percentage, a rising time, a settling time, a phase shift, a damping ratio, an energy function, a cumulative deviation in energy, Fourier transformation spectrum information, a principal component, and a steady state gain (P, Q, u, f) of the event. The features are extracted from the time series of active power, reactive power, voltage, and frequency.
  • One might also represent an event by a counting vector of integers, where each integer counts how many times a certain feature occurs in the event. A counting vector allows for a more detailed description of the event as a multi-set of features, whereas a binary fingerprint as introduced above simply describes the event as a set of features. In any case, counting vectors can easily be converted into binary vectors. An example of counting vectors and binary vectors may be seen in FIG. 4A.
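  • As a concrete illustration, the minimal sketch below encodes a handful of hypothetical event features into a counting vector and collapses it into a binary fingerprint. The feature names, bin edges, and values are illustrative assumptions, not values taken from this disclosure.

```python
# Sketch: build a counting vector and binary fingerprint from event features.
import numpy as np

def counting_vector(event_features, bins):
    """Count how many extracted values of each named feature fall in its bin."""
    counts = np.zeros(len(bins), dtype=int)
    for i, (name, lo, hi) in enumerate(bins):
        for value in event_features.get(name, []):
            if lo <= value < hi:
                counts[i] += 1
    return counts

def binary_fingerprint(counts):
    """Collapse a counting vector into a bit-string: 1 if the feature occurs at all."""
    return (counts > 0).astype(int)

# Hypothetical features extracted from the P, Q, V, f time series of one event.
event = {"peak": [1.08, 0.97], "rise_time": [0.4], "damping_ratio": [0.12]}
bins = [("peak", 1.0, 1.2), ("rise_time", 0.0, 1.0), ("damping_ratio", 0.0, 0.2)]
cv = counting_vector(event, bins)      # array([1, 1, 1]) for this example
fp = binary_fingerprint(cv)            # array([1, 1, 1])
```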
  • One way to quantify the similarity between two sets (or multi-sets) of features is the Tanimoto coefficient. If A and B are sets, or multi-sets, of features, then the Tanimoto coefficient, S_T(A, B), is:

  • S_T(A, B) = |A ∩ B| / |A ∪ B|  Equation 1
  • If A and B are given as two bit-strings, then the Tanimoto coefficient becomes:

  • S_T(A, B) = |A ∧ B| / |A ∨ B|  Equation 2
  • where ∧ and ∨ are the bitwise logical ‘and’ and logical ‘or’ respectively, and |A| is the number of bits set to one in the bit-string A. See FIG. 4B below for an example.
  • The Tanimoto coefficient as defined above quantifies the similarity between two bit-strings as a number in the interval [0, 1], where 0 indicates that the two bit-strings have no one-bits in common, and 1 indicates that the two bit-strings are equal.
  • Given a new event A and an existing selected event database B, the event screening process is as follows:
  • If S_T(A, B) < similarity threshold,
  • Then the event A is selected;
  • otherwise, the event A is discarded.
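  • A minimal sketch of this screening rule is shown below, assuming the new event's fingerprint is compared against every fingerprint already stored in the selected-event database; the threshold value of 0.8 is an illustrative assumption.

```python
# Sketch: Tanimoto-based screening of a new event fingerprint against a database.
import numpy as np

def tanimoto(a, b):
    """Tanimoto coefficient of two binary fingerprints (cf. Equation 2)."""
    a, b = np.asarray(a, dtype=bool), np.asarray(b, dtype=bool)
    union = np.count_nonzero(a | b)
    return np.count_nonzero(a & b) / union if union else 1.0

def screen_event(new_fp, database, threshold=0.8):
    """Keep (and store) the event only if it is dissimilar to every stored event."""
    if all(tanimoto(new_fp, fp) < threshold for fp in database):
        database.append(new_fp)
        return True
    return False

db = [[1, 0, 1, 1], [0, 1, 1, 0]]
print(screen_event([1, 0, 1, 0], db))   # True or False depending on the threshold
```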
  • The second step is the sequential parameter identifiability. The goal of this step is to perform a comprehensive identifiability study across multiple events and to provide an identifiable parameter set for the simultaneous calibration, which tunes the most identifiable parameters to match the measurements of multiple events simultaneously.
  • The algorithm first generates the trajectory sensitivity matrices for all the selected disturbances by perturbing each parameter and feeding the perturbed parameter values to a playback simulation platform. Then the algorithm provides two options depending on the number of disturbances being considered. If the number of disturbances is large enough that the union of the null spaces of all the disturbances has a rank higher than the number of parameters, the algorithm solves an optimization problem to find a solution that has the minimum total distance to all the null spaces. Such a solution gives a comprehensive identifiability ranking of parameters across disturbances. If the number of disturbances is small, the second option is taken, which evaluates the identifiability of parameters for each disturbance and then calculates the average identifiability ranking across disturbances. Since the sensitivity studies are conducted at the parameters' default values, the conditioning tool also performs a global sensitivity consistency study when the parameters' values deviate far from their default values. Such a study portrays the geometry of the parameter sensitivity in the entire parameter space.
  • When N events are considered, applying singular value decomposition (SVD) to the sensitivity trajectory matrices results in N null spaces. The null space for one event can also be interpreted as a system of homogeneous algebraic equations with the parameter sensitivities as the unknowns. Since the null space from one event has a rank lower than the number of parameters, the number of equations is less than the number of unknowns. Considering more events is equivalent to adding more equations to the system. After the number of events exceeds a certain value (and provided the characteristics of the events are diverse), the system has more equations than unknowns. (In practice, the numerical rank should be greater than the number of unknowns.) The solution that minimizes the difference between the left and right hand sides of the equation system represents the comprehensive sensitivity magnitude of all parameters across all the considered events. For sensitivity dependency, accounting for the null spaces of all considered events, a comprehensive dependency index can also be calculated.
  • FIG. 4C demonstrates how to use null spaces from multiple events. The three lines shown in FIG. 4C correspond to the vectors that span the null space of the sensitivity matrix for multiple events. The point that is nearest in distance to all three lines represents the relationship among the dependent parameters satisfied across all of those events. The sensitivity magnitude vector M_sen ∈ R^(N_p×1) of all parameters is the solution of the following optimization problem:

  • min Σ_{j=1}^{N_event} Σ_{i=1}^{N_null} ν_{i,j}^T M_sen  Equation 3
  • where N_event is the number of events and N_null is the dimension of the null space of the sensitivity matrix. In some embodiments, the above optimization problem may be solved using a standard Linear Programming approach.
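  • The following sketch illustrates the idea behind Equation 3 under stated assumptions: the null-space bases of the per-event trajectory sensitivity matrices are stacked, and a unit-norm least-squares solution (the smallest right singular vector of the stacked basis) is used as a simple stand-in for the Linear Programming formulation mentioned above. The tolerance and the random test matrices are illustrative only.

```python
# Sketch: combine per-event null spaces into one comprehensive sensitivity ranking.
import numpy as np

def null_space_basis(S, tol=1e-10):
    """Rows spanning the (right) null space of a trajectory sensitivity matrix S."""
    _, sing, Vt = np.linalg.svd(S)
    rank = int(np.count_nonzero(sing > tol * sing.max()))
    return Vt[rank:]                      # shape: (n_null, n_params)

def comprehensive_sensitivity(sensitivity_matrices):
    """Unit-norm M_sen minimizing the stacked projections v_ij^T M_sen (least squares)."""
    stacked = np.vstack([null_space_basis(S) for S in sensitivity_matrices])
    _, _, Vt = np.linalg.svd(stacked)
    return np.abs(Vt[-1])                 # comprehensive magnitude per parameter

# Toy example: three events, five parameters, random sensitivity matrices.
rng = np.random.default_rng(0)
mats = [rng.normal(size=(3, 5)) for _ in range(3)]
print(comprehensive_sensitivity(mats))
```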
  • The third step is the Bayesian Optimization. Since grid disturbances occur intermittently, the user of the calibration tool may be required to re-calibrate model parameters in a sequential manner as new disturbances come in. In this scenario, the user starts with a model that was calibrated to some observed grid disturbances and observes a larger than acceptable mismatch with a newly encountered disturbance. The task now is to tweak the model parameters so that the model explains the new disturbance without detrimentally affecting the match with earlier disturbances. One potential solution is to run calibration simultaneously on all events of interest strung together; however, this comes at the cost of significant computational expense and the engineering involved in enabling a batch of events to be run simultaneously. A more efficient method is to carry some essential information forward from the earlier calibration runs to guide the subsequent calibration run, so that it helps explain the new disturbance without losing the earlier calibration matches.
  • In the exemplary embodiment, the framework of Bayesian estimation may be used to develop a sequential estimation capability into the existing calibration framework. The true posterior distribution of parameters (assuming Gaussian priors) after the calibration process may be quite complicated due to the nonlinearity of the models. One approach in sequential estimation is to consider a Gaussian approximation of this posterior as is done in Kalman filtering approaches to sequential nonlinear estimation. In a nonlinear least squares approach, this simplifies down to a quadratic penalty term for deviations from the previous estimates, and the weights for this quadratic penalty come from a Bayesian argument.
  • min_x Σ_{t=1}^{T} w_t · ((y_t − y_t(x)) / y_base) + (x − x_mean)^T (Σ_b^k)^{−1} (x − x_mean)
  • The measured values of P and Q may be represented by a simulated value plus an error term.

  • y_i = y(x_i | b) + e_i

  • Σb kb k−1 +J T *J
  • In some embodiments, the errors may follow a Normal distribution, either independently or with errors correlated in some known way, such as, but not limited to, a multivariate Normal distribution.

  • e_i ~ N(0, σ_i)

  • e ~ N(0, Σ)
  • The above may be used to find the parameters of a model b from the data.
  • P(b | {y_i}) ∝ P({y_i} | b) P(b)
     ∝ Π_i exp[−½ ((y_i − y(x_i | b)) / σ_i)²] P(b)
     = exp[−½ Σ_i ((y_i − y(x_i | b)) / σ_i)²] P(b)
     = exp[−½ χ²(b)] P(b)
  • Alternatively, χ²(b) may be expanded around the parameter value b_0 that minimizes χ² using a Taylor series approximation:
  • −½ χ²(b) ≈ −½ χ²_min − ½ (b − b_0)^T [½ ∂²χ²/∂b ∂b] (b − b_0)
     P(b | {y_i}) ∝ exp[−½ (b − b_0)^T Σ_b^{−1} (b − b_0)] P(b)
     Σ_b = [½ ∂²χ²/∂b ∂b]^{−1}
  • where Σ_b is the covariance, or “standard error,” matrix of the fitted parameters.
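  • A hedged sketch of how the quadratic prior penalty and the covariance update above could be wired together is given below. The simulate() callable, the Nelder-Mead solver, the finite-difference Jacobian, and the toy linear example are assumptions for illustration, not part of the disclosure.

```python
# Sketch: sequential penalized re-fit of parameters to a new event.
import numpy as np
from scipy.optimize import minimize

def sequential_update(simulate, y_meas, x_prev, cov_prev, weights=None):
    """Re-fit the parameters to a new event with a quadratic prior penalty."""
    w = np.ones_like(y_meas) if weights is None else weights
    cov_inv = np.linalg.inv(cov_prev)

    def objective(x):
        resid = y_meas - simulate(x)                 # mismatch on the new event
        prior = x - x_prev                           # deviation from last estimate
        return np.sum(w * resid**2) + prior @ cov_inv @ prior

    x_new = minimize(objective, x_prev, method="Nelder-Mead").x

    # Finite-difference Jacobian of the simulated response at the new estimate.
    eps = 1e-6
    J = np.column_stack([(simulate(x_new + eps * e) - simulate(x_new)) / eps
                         for e in np.eye(len(x_new))])
    cov_new = cov_prev + J.T @ J   # mirrors the update Sigma_b^k = Sigma_b^{k-1} + J^T J
    return x_new, cov_new

# Toy usage with a linear stand-in for the playback simulation.
sim = lambda x: np.array([x[0] + 2.0 * x[1], x[0] - x[1]])
x_hat, cov = sequential_update(sim, np.array([1.0, 0.5]), np.array([0.2, 0.1]), np.eye(2))
```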
  • FIG. 5 is a process 500 for power system model parameter conditioning according to some embodiments. At Step 505, disturbance data may be obtained (e.g., from a PMU or DFR) to provide, for example, V, f, P, and Q measurement data at a Point Of Interest (“POI”). At Step 510, a playback simulation may run load model benchmarking using default model parameters (e.g., associated with a Positive Sequence Load Flow (“PSLF”) or Transient Security Assessment Tool (“TSAT”)). At Step 515, model validation may compare measurements to the default model response. If the response matches the measurements, the framework may end (e.g., the existing model is sufficiently correct and does not need to be updated). At Step 520, an event analysis algorithm may determine whether an event is qualitatively different from previous events. At Step 525, a parameter identifiability analysis algorithm may determine the most identifiable set of parameters across all events of interest. For example, a first event may have 90 to 100 parameters. For that event, Step 525 uses the parameter identifiability algorithm to select 1 to 20 of those parameters.
  • Finally, at Step 530 an Unscented Kalman Filter (“UKF”)/optimization-based parameter estimation algorithm/process may be performed. As a result, the estimated parameter values, confidence metrics, and error in model response (as compared to measurements) may be reported. In some embodiments, Steps 505-515 are considered model validation 535 and Steps 520-530 are considered model calibration 540. As described elsewhere herein, the systems may use one or both of model validation 535 and model calibration 540. In some embodiments, Steps 505-530 are considered a model validation and calibration (MVC) process 500.
  • Disturbance data monitored by one or more PMUs coupled to an electrical power distribution grid may be received. The disturbance data can include voltage (“V”), frequency (“f”), and/or active and reactive (“P” and “Q”) power measurements from one or more points of interest (POI) on the electrical power grid. A power system model may include model parameters. These model parameters can be the current parameters incorporated in the power system model. The current parameters can be stored in a model parameter record. Model calibration involves identifying a subset of parameters that can be “tuned” and modifying/adjusting those parameters such that the power system model behaves identically or almost identically to the actual power component being represented by the power system model.
  • In accordance with some embodiments, the model calibration can be implemented with three functionalities. The first functionality is an event screening tool to select characteristics of a disturbance event from a library of recorded event data. This functionality can simulate the power system responses when the power system is subjected to different disturbances. The second functionality is a parameter identifiability study. When implementing this functionality, the system can simulate the response(s) of a power system model. The third functionality is simultaneous tuning of models using event data to adjust the identified model parameters. According to various embodiments, the second functionality (parameter identifiability) and the third functionality (tuning of model parameters) may be performed using a surrogate model in place of a dynamic simulation engine.
  • Event screening can be implemented during the simulation to provide computational efficiency. If hundreds of events are stitched together and fed into the calibration algorithm unselectively, the algorithm may not be able to converge. To keep the number of events manageable while still maintaining an acceptable representation of all the events, a screening procedure may be performed to select the most characteristic events. Depending on the type of event, the measurement data can have different characteristics. For example, if an event is a local oscillation, the oscillation frequency in the measurement data will be much faster than for an inter-area oscillation event. In some implementations, a K-medoids clustering algorithm can be utilized to group events with similar characteristics together, thus reducing the number of events to be calibrated.
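  • The sketch below shows one way such a K-medoids grouping over fingerprint distances could look, so that only one representative (medoid) per cluster is fed to calibration. The clustering loop, the value of k, and the use of 1 minus the Tanimoto coefficient as the distance are assumptions; the disclosure only names K-medoids as one possible algorithm.

```python
# Sketch: simple K-medoids over a pairwise Tanimoto-distance matrix of fingerprints.
import numpy as np

def tanimoto_distance(a, b):
    """1 minus the Tanimoto coefficient of two binary fingerprints."""
    a, b = np.asarray(a, dtype=bool), np.asarray(b, dtype=bool)
    union = np.count_nonzero(a | b)
    return 1.0 - (np.count_nonzero(a & b) / union if union else 1.0)

def k_medoids(fingerprints, k, n_iter=50, seed=0):
    """Very small K-medoids pass: assign to nearest medoid, re-pick central members."""
    n = len(fingerprints)
    D = np.array([[tanimoto_distance(fingerprints[i], fingerprints[j])
                   for j in range(n)] for i in range(n)])
    rng = np.random.default_rng(seed)
    medoids = rng.choice(n, size=k, replace=False)
    for _ in range(n_iter):
        labels = np.argmin(D[:, medoids], axis=1)       # assign to nearest medoid
        new_medoids = medoids.copy()
        for c in range(k):
            members = np.where(labels == c)[0]
            if len(members):                            # most central member of cluster c
                within = D[np.ix_(members, members)].sum(axis=1)
                new_medoids[c] = members[np.argmin(within)]
        if np.array_equal(new_medoids, medoids):
            break
        medoids = new_medoids
    return medoids, np.argmin(D[:, medoids], axis=1)

# Toy usage: four event fingerprints reduced to two representatives.
fps = [[1, 0, 1, 1], [1, 0, 1, 0], [0, 1, 0, 0], [0, 1, 1, 0]]
representatives, cluster_of = k_medoids(fps, k=2)
```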
  • Instead of using the time-consuming simulation engine, a surrogate model or models (such as neural networks) with a function equivalent to the dynamic simulation engine may be used for both identifiability and calibration. The surrogate model may be built offline while there is no request for model calibration. Once built, the surrogate model, comprising a set of weights and biases in the learned network structure, is used to predict the active power ({circumflex over (P)}) and reactive power ({circumflex over (Q)}) given different sets of parameters together with time-stamped voltage (V) and frequency (f).
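  • As a rough illustration of the surrogate idea, the following sketch trains a small neural network to map voltage, frequency, and candidate parameter values to predicted active and reactive power. The synthetic training data, the scikit-learn MLPRegressor, and the network size are assumptions made only for illustration.

```python
# Sketch: offline-trained neural-network surrogate for the dynamic simulation engine.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
# Synthetic training rows: inputs [V, f, theta_1, theta_2], outputs [P, Q].
X = rng.uniform(size=(2000, 4))
Y = np.column_stack([X[:, 0] * X[:, 2] + 0.1 * X[:, 1],       # toy "active power"
                     X[:, 0] * X[:, 3] - 0.1 * X[:, 1]])      # toy "reactive power"

# Offline training of the surrogate while no calibration task is pending.
surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
surrogate.fit(X, Y)

# At calibration time, trial parameter values are evaluated through the surrogate.
candidate = np.array([[1.0, 0.5, 0.3, 0.7]])    # V, f and a trial parameter set
p_hat, q_hat = surrogate.predict(candidate)[0]
```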
  • The parameter identifiability analysis addresses two aspects: (a) the magnitude of the sensitivity of the output to a parameter change; and (b) the dependencies among different parameter sensitivities. For example, if the sensitivity magnitude of a particular parameter is low, that parameter appears as a near-zero row in the parameter estimation problem's Jacobian matrix. Also, if some of the parameter sensitivities have dependencies, there is a linear dependence among the corresponding rows of the Jacobian. Both of these scenarios lead to singularity of the Jacobian matrix, making the estimation problem infeasible. Therefore, it may be important to select a subset of parameters that are highly sensitive and that result in no dependencies among parameter sensitivities. Once the subset of parameters is identified, the values of those parameters in the active power system model may be updated, and the system may generate a report and/or display of the estimated parameter value(s), confidence metrics, and the model error response as compared to measured data.
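  • A simple identifiability screen along these lines is sketched below. It treats the Jacobian as (samples × parameters), so a parameter corresponds to a column, and the magnitude and condition-number thresholds, the parameter names, and the toy Jacobian are illustrative assumptions.

```python
# Sketch: drop low-sensitivity parameters, then flag near-linear dependence.
import numpy as np

def identifiable_subset(J, names, magnitude_tol=1e-3, condition_tol=1e6):
    """Keep parameters with large sensitivity and without near-linear dependence."""
    norms = np.linalg.norm(J, axis=0)                 # sensitivity magnitude per parameter
    keep = [i for i, n in enumerate(norms) if n > magnitude_tol]
    J_sub = J[:, keep]
    if len(keep) > 1 and np.linalg.cond(J_sub) > condition_tol:
        # Dependencies remain: drop the parameter most involved in the
        # near-null direction (smallest right singular vector).
        _, _, Vt = np.linalg.svd(J_sub)
        keep.pop(int(np.argmax(np.abs(Vt[-1]))))
    return [names[i] for i in keep]

# Toy Jacobian: third column is tiny, fourth nearly duplicates the first.
J = np.array([[1.0, 0.2, 1e-6, 1.0],
              [0.5, 0.9, 1e-6, 0.5],
              [0.1, 0.4, 1e-6, 0.1]])
print(identifiable_subset(J, ["Tq", "Td", "H", "Xd"]))
```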
  • FIG. 6 illustrates a model calibration algorithm that can be used by the model calibration algorithm component in accordance with some embodiments. Here, the model calibration algorithm attempts to find a parameter value (θ*) for a parameter (or parameters) of the power system model that creates a matching output between the simulated active power ({circumflex over (P)}) and the simulated reactive power ({circumflex over (Q)}) predicted by the model with respect to the actual active power (P) and actual reactive power (Q) of the component on the electrical grid.
  • As grid disturbances occur intermittently, the user of the calibration tool described herein may be required to re-calibrate model parameters in a sequential manner as new disturbances come in. In this scenario, the user starts with a model that was calibrated to some observed grid disturbances and observes a larger than acceptable mismatch with a newly encountered disturbance. The task now is to tweak the model parameters so that the model explains the new disturbance without detrimentally affecting the match with earlier disturbances. One solution would be to run calibration simultaneously on all events of interest strung together, but this comes at the cost of significant computational expense and the engineering involved in enabling a batch of events to be run simultaneously. It is far more preferable to carry some essential information forward from the earlier calibration runs to guide the subsequent calibration run, so that it helps explain the new disturbance without losing the earlier calibration matches.
  • In the exemplary embodiment, the framework of Bayesian estimation may be used to develop a sequential estimation capability within the existing calibration framework. The true posterior distribution of parameters (assuming Gaussian priors) after the calibration process can be quite complicated due to the nonlinearity of the models. The typical approach in sequential estimation is to consider a Gaussian approximation of this posterior, as is done in Kalman filtering approaches to sequential nonlinear estimation. In the nonlinear least squares approach, this boils down to a quadratic penalty term for deviations from the previous estimates, and the weights for this quadratic penalty come from a Bayesian argument. FIG. 7 displays example results of the performance (in root mean square [r.m.s.] terms) of a model calibrated to only one event (in the corresponding column) evaluated against all other events (listed in the rows). FIG. 8 displays example results of the sequential estimation module being implemented and tested with a test data set for a gas plant. FIG. 9 displays example results of the sequential estimation module being implemented and tested with a test data set for a hydro plant.
  • FIG. 7 shows example results without sequential estimation. In this example, the calibration algorithm was executed from scratch for each of the 12 events in the gas plant case to obtain 12 sets of calibrated parameters. Then a model validation exercise was executed in which the model response resulting from each of these 12 sets of calibrated parameters was compared for each of the 12 events. The root mean square (r.m.s.) errors between the measured and simulated real and reactive (P and Q, respectively) power responses are shown in FIG. 7. As shown in FIG. 7, there are a lot of “reds” in this table, which means that if the model is tuned to only one event (irrespective of the event), one cannot expect it to necessarily explain all other events. This motivates the need for sequential estimation.
  • FIGS. 8 and 9 illustrate the mismatch in model response for each event for the model parameters obtained at the end of sequential estimation for the gas plant and hydro plant, respectively. Sequential estimation refers to calibrating the model one event at a time, sequentially, while carrying forward some information from the previous runs as described earlier. The example results shown in FIGS. 8 and 9 show a marked improvement over the results shown in FIG. 7, as the parameters at the end of the sequential run are able to explain most events better than the default set. For example, there are two columns in each of FIGS. 8 and 9; the one labelled ‘sequential, forgetting factor=1e−2’ is the main sequential result, while the one marked ‘sequential without prior’ is a case where the only sequential aspect is the passing of the last estimated value as the initial guess for the next run (i.e., the prior weight is set to zero). As shown in the figures, the performance of this approach also appears competitive and in some instances better than the main sequential approach. While this can be explained in the noise-free case evaluated here, the sequential approach is expected to perform more robustly in the presence of noise. Finally, the last two rows compare the estimated parameter set with the true parameter set in terms of the normalized 2-norm and infinity-norm. The comparison between the two options is inconclusive across the two cases (but slightly favors the main sequential approach).
  • FIGS. 10A and 10B illustrate a process for identifying and estimating parameters in accordance with at least one embodiment. The raw parameters are analyzed for identifiability. Some of the parameters are then down-selected, which leads to the parameter estimation.
  • FIG. 11 illustrates candidate parameter estimation algorithms 1100 according to some embodiments. In one approach 1120, measured input/output data 1110 (u, ym) may be used by a power system component model 1122 and an UKF based approach 1124 to create an estimation parameter (p*) 1140.
  • In particular, the system may compute sigma points based on covariance and standard deviation information. The Kalman Gain matrix K may be computed based on Ŷ and the parameters may be updated based on:

  • p_k = p_{k−1} + K(y_m − ŷ)
  • until p_k converges. According to another approach 1130, the measured input/output data 1110 (u, ym) may be used by a power system component model 1132 and an optimization-based approach 1134 to create the estimation parameter (p*) 1140. In this case, the following optimization problem may be solved:
  • min_p ‖y_m − Ŷ(p)‖²
  • The system may then compute output-to-parameter Jacobian information and iteratively solve the above optimization problem by moving the parameters in the directions indicated by the Jacobian information.
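  • The sketch below shows a generic Gauss-Newton style iteration of this kind under stated assumptions; the simulate() callable, the finite-difference Jacobian, the step control, and the toy linear example are illustrative and not taken from the disclosure.

```python
# Sketch: iterative, Jacobian-guided minimization of ||y_m - y_hat(p)||^2.
import numpy as np

def estimate_parameters(simulate, y_m, p0, n_iter=20, eps=1e-6, tol=1e-9):
    """Move p along the direction suggested by the output/parameter Jacobian."""
    p = np.asarray(p0, dtype=float)
    for _ in range(n_iter):
        y_hat = simulate(p)
        residual = y_m - y_hat
        # Finite-difference Jacobian d y_hat / d p, one column per parameter.
        J = np.column_stack([(simulate(p + eps * e) - y_hat) / eps
                             for e in np.eye(len(p))])
        step, *_ = np.linalg.lstsq(J, residual, rcond=None)  # Gauss-Newton step
        p = p + step
        if np.linalg.norm(step) < tol:
            break
    return p

# Toy usage with a linear "model", so the true parameters are recovered.
A = np.array([[1.0, 2.0], [3.0, 1.0], [0.5, 0.5]])
true_p = np.array([0.7, -0.2])
p_star = estimate_parameters(lambda p: A @ p, A @ true_p, np.zeros(2))
```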
  • FIG. 12 illustrates a two-stage approach of the process for model calibration. In this approach, PMU data from events is fed into a dynamic simulation engine. The dynamic simulation engine communicates with a parameter identifiability analysis component and returns the changes to the parameters. The parameter identifiability analysis component also transmits a set of identifiable parameters to a model calibration algorithm component. The model calibration algorithm component uses the set of identifiable parameters, PMU data from events, and other data from the dynamic simulation engine to generate estimated parameters. This approach may be used to calibrate the tunable model parameters.
  • With the playback simulation capability, the user can compare the response (active power and reactive power) of system models with the dynamics observed during disturbances in the system, which is called model validation. The grid disturbances (a.k.a. events) can also be used to correct the system model when the simulated response is significantly different from the measured values, which is called model calibration. As shown on the right side of FIG. 12, the goal is to achieve a satisfactory match between the measurement data and the simulated response. If an obvious discrepancy is observed, then the model calibration process may be employed.
  • The first step of the model calibration process is parameter identification, which aims to identify a subset of parameters with strong sensitivity to the observed event. In the exemplary embodiment, the model calibration process requires a balance between matching in the measurement space and reasonableness in the model parameter space. Numerical curve fitting without adequate engineering guidance tends to produce overfitted parameter results and leads to non-unique sets of parameters (yielding the same curve fitting performance), which should be avoided.
  • The embodiments described herein may also be implemented using any number of different hardware configurations. For example, FIG. 13 is a block diagram of an apparatus or platform 1300 that may be, for example, associated with the system 200 of FIG. 2 and/or any other system described herein. The platform 1300 comprises a processor 1310, such as one or more commercially available Central Processing Units (“CPUs”) in the form of one-chip microprocessors, coupled to a communication device 1320 configured to communicate via a communication network (not shown in FIG. 13). The communication device 1320 may be used to communicate, for example, with one or more remote measurement units, components, user interfaces, etc. The platform 1300 further includes an input device 1340 (e.g., a computer mouse and/or keyboard to input power grid and/or modeling information) and/or an output device 1350 (e.g., a computer monitor to render a display, provide alerts, transmit recommendations, and/or create reports). According to some embodiments, a mobile device, monitoring physical system, and/or PC may be used to exchange information with the platform 1300.
  • The processor 1310 also communicates with a storage device 1330. The storage device 1330 may comprise any appropriate information storage device, including combinations of magnetic storage devices (e.g., a hard disk drive), optical storage devices, mobile telephones, and/or semiconductor memory devices. The storage device 1330 stores a program 1312 and/or a power system disturbance based model calibration engine 1314 for controlling the processor 1310. The processor 1310 performs instructions of the programs 1312, 1314, and thereby operates in accordance with any of the embodiments described herein. For example, the processor 1310 may calibrate a dynamic simulation engine, having system parameters, associated with a component of an electrical power system (e.g., a generator, wind turbine, etc.). The processor 1310 may receive, from a measurement data store 1360, measurement data measured by an electrical power system measurement unit (e.g., a phasor measurement unit, digital fault recorder, or other means of measuring frequency, voltage, current, or power phasors). The processor 1310 may then pre-condition the measurement data and set up an optimization problem based on a result of the pre-conditioning. The system parameters of the dynamic simulation engine may be determined by solving the optimization problem with an iterative method until at least one convergence criterion is met. According to some embodiments, solving the optimization problem includes a Jacobian approximation that does not call the dynamic simulation engine if an improvement of the residual meets a pre-defined criterion.
  • The programs 1312, 1314 may be stored in a compressed, uncompiled and/or encrypted format. The programs 1312, 1314 may furthermore include other program elements, such as an operating system, clipboard application, a database management system, and/or device drivers used by the processor 1310 to interface with peripheral devices.
  • As used herein, information may be “received” by or “transmitted” to, for example: (i) the platform 1300 from another device; or (ii) a software application or module within the platform 1300 from another software application, module, or any other source.
  • FIG. 14 illustrates a method 1400 of performing model calibration using a surrogate model in accordance with some embodiments. For example, the method 1400 may be performed by a computing system such as a web server, a user device, a database, an on-premises server, a cloud platform, a desktop PC, a mobile device, and the like. In one embodiment, the computing device receives 1410 a plurality of sequential events as described herein. The computing device filters 1420 the sequential events using similarity screening, where new events are evaluated to determine whether they are different from the previously received events. In some embodiments, the event's dynamic features are coded as a bit-string. The features considered include P, Q, U, and F. In at least one embodiment, the Tanimoto coefficient is used for the similarity metrics. The computing device identifies 1430 the sequential parameters based on sensitivity, where the most sensitive parameter subset is determined based on an increasing number of events. Then the computing device performs 1440 Bayesian optimization to determine new parameter values by considering deviation from previous parameter estimates. In some embodiments, the weight for the penalty is determined from a Bayesian argument. When a new event is received 1410, the process 1400 is performed again to re-adjust for the new event.
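  • At a high level, method 1400 can be pictured as the loop sketched below. The three callables stand for the screening, identifiability, and Bayesian-optimization steps described above; how they are wired together, and the placeholder lambdas, are assumptions for illustration only.

```python
# Sketch: sequential calibration loop over incoming events (cf. method 1400).
def sequential_calibration(events, params, screen, identify, calibrate):
    """Screen each event, identify sensitive parameters, then re-calibrate."""
    history = []                          # database of selected (novel) events
    for event in events:
        if not screen(event, history):    # skip events too similar to earlier ones
            continue
        history.append(event)
        tunable = identify(event, history, params)   # most sensitive subset so far
        params = calibrate(event, params, tunable)   # penalized re-fit of that subset
    return params

# Toy wiring with placeholder callables.
final = sequential_calibration(
    events=[{"id": 1}, {"id": 2}],
    params={"T1": 0.5},
    screen=lambda e, h: True,
    identify=lambda e, h, p: list(p),
    calibrate=lambda e, p, t: p)
```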
  • In some embodiments, the parameter determination for the subsequent event considers both the residual of the simulated response and the statistical information of previously determined parameter values based on previous events. The events are filtered/screened using an event screening process that is based on features of the events, including a peak value, a bottom value, an overshoot percentage, a rising time, a settling time, a phase shift, a damping ratio, an energy function, a cumulative deviation in energy, Fourier transformation spectrum information, a principal component, and a steady state gain (P, Q, u, f) extracted from the time series of active power, reactive power, voltage, and frequency.
  • In some other embodiments, the system 300 (shown in FIG. 3) stores a model of a device, such as generator 110. The model includes a plurality of parameters. The system 300 receives a plurality of events 314, 316, and 318 (shown in FIG. 3) associated with the device. In some embodiments, the events 314, 316, and 318 include sensor information of the event occurring at the device. In other embodiments, the sensor information is associated with a similar device. The system 300 filters the plurality of events to generate a plurality of unique events. The system 300 sequentially analyzes the plurality of unique events to determine a set of calibrated parameters 312 (shown in FIG. 3) for the model. The system 300 updates the model to include the set of calibrated parameters 312.
  • In some further embodiments, the system 300 executes the model based on one or more events of the plurality of events 314, 316, and 318 to generate one or more results and identifies one or more sensitive parameters, such as tunable parameters, based on the one or more results. The system 300 may perform a Bayesian optimization on the one or more sensitive parameters to determine updated values for the one or more sensitive parameters. In these embodiments, the system 300 performs the Bayesian optimization by determining the updated values for the one or more sensitive parameters based on a nonlinear optimization. The objective function of the nonlinear optimization includes two terms. The first term is calculated as the residual between a simulated response based on the calibrated parameters and the measured response. The second term is calculated as a quadratic penalty term for deviations of parameters from one or more previous estimates. The weights for the quadratic penalty are derived from a Bayesian argument. The system 300 derives the quadratic penalty based on a covariance matrix of previously estimated parameters.
  • In still further embodiments, the system 300 codes each of the plurality of events based on one or more dynamic features of the corresponding event. The one or more dynamic features may include, but are not limited to, one or more of a peak value, a bottom value, an overshoot percentage, a rising time, a settling time, a phase shift, a damping ratio, an energy function, a cumulative deviation in energy, Fourier transformation spectrum information, a principal component, and a steady state gain of the corresponding event. The system 300 may extract the one or more dynamic features from a time series of active power, reactive power, voltage, and frequency of the corresponding event.
  • The plurality of events may each be coded into a bit-string. The plurality of events may also be coded into binary vectors. The system 300 compares the plurality of binary vectors using the Tanimoto coefficient. Then the system 300 discards similar subsequent events based on a similarity threshold and generates the plurality of unique events based on at least one remaining event.
  • In some embodiments, the plurality of unique events includes at least a first event, a second event, and a third event. The model includes a first set of parameters. The system 300 executes the model using the first set of parameters and the first event to generate a first set of results. The system 300 analyzes the first set of results to generate a second set of parameters. The system 300 executes the model using the second set of parameters and the second event to generate a second set of results. The system 300 analyzes the second set of results to generate a third set of parameters. The system 300 executes the model using the third set of parameters and the third event to generate a third set of results. The system 300 analyzes the third set of results to generate a fourth set of parameters.
  • In some further embodiments, the system 300 compares the first set of results, the second set of results, and the third set of results to determine the set of calibrated parameters 312. In these embodiments, each set of results includes the residual error between the simulated response and the measured response for each of the one or more sensitive parameters. The system 300 compares the plurality of residual errors to select the set of calibrated parameters with the minimal overall residual error.
  • FIG. 15 illustrates a process 1500 for sequential calibration using the system architecture 300 (shown in FIG. 3). In the exemplary embodiment, the system 300 receives a plurality of events, such as events 314, 316, and 318 (shown in FIG. 3) and events 1502, 1510, and 1514. In some embodiments, process 1500 is performed by one or more of the system architecture 300, the processor 1310, and the power system disturbance based model calibration engine 1314 (both shown in FIG. 13).
  • In the exemplary embodiment, process 1500 receives initial parameters 1504 and chooses a first event 1502. In some embodiments, the first event 1502 is one of the received plurality of events. In other embodiments, the first event 1502 is a historical event or an event designated for testing purposes. The first event 1502 and the initial parameters 1504 are used as inputs for a model validation and calibration (MVC) process 1506, also known as MVC engine 1506. In the exemplary embodiment, MVC process 1506 is similar to MVC process 500. In the exemplary embodiment, the first event 1502 includes at least the actual voltage, frequency, active power, and reactive power for the event. The MVC process 1506 generates a first updated set of parameters 1508 based on how the initial parameters 1504 matched up with the first event 1502. In some embodiments, the MVC process 1506 uses the initial parameters 1504 and the voltage and frequency to predict the active and reactive power for the first event 1502. Then the MVC process 1506 compares the predicted active and reactive power to the actual active and reactive power for the first event 1502. The MVC process 1506 adjusts the initial parameters 1504 based on that comparison to generate an updated parameter set 1508.
  • In process 1500, the first updated set of parameters 1508 is then used with a second event 1510 as inputs into the MVC process 1506 to generate a second updated set of parameters 1512. The second updated set of parameters 1512 is then used with a third event 1514 as another set of inputs for the MVC process 1506 to generate a third updated set of parameters 1516.
  • In the exemplary embodiment, the process 1500 continues to serially analyze events to generate updated parameter sets. For example, if the process 1500 receives 25 events, then each event will be analyzed in order to determine updated parameters based on that event and MVC process 1506, with the goal being that the parameters allow the MVC process 1506 to generate adjusted parameters to accurately predict the outcome of the plurality of events.
  • By analyzing each event individually and serially rather than as a group or in parallel, process 1500 allows the parameters that affect each event to be analyzed, rather than having events that cancel out the effects of different parameters. For example, considering three different events, event-1, event-2, and event-3, the sequential approach shown in process 1500 will generate three down-selected parameter subsets, say P-1, P-2, and P-3, corresponding to the three events. Each parameter subset is determined to be the best subset that can describe the corresponding event based on the parameter identifiability algorithm 525. The parameter subsets P-1, P-2, and P-3 may then be further used for the parameter estimation process 530 based on the corresponding event. However, the parameter identifiability in a group calibration approach may not reach such an optimality. Furthermore, the important parameters are identified for each event, and the parameters for each of these events are then analyzed overall for the entire set of events. In this way, the parameters for each event contribute to the final parameters and allow the system to find the ideal parameters for the entire set while still taking into account each individual event.
  • FIG. 16 is a data flow diagram illustrating a sub-section 1600 of the architecture system 300 (shown in FIG. 3) executing the sequential calibration process 1500 (shown in FIG. 15). In the exemplary embodiment, the system architecture 1600 receives network models 1602, sub-system definitions 1604, dynamic models 1606, and event data 1608 at an input handling component 1610. In some embodiments, input handling component 1610 includes the event screening component 302 (shown in FIG. 3).
  • Steady state network models 1602 (sometimes called power-flow data) can be either EMS or system planning models. In some embodiments, they may be in e-terra NETMOM or CIM13 format. Dynamic models 1606 can be in PSS/E, PSLF, or TSAT format. The system 1600 can also accept more than one dynamic data file when the data is distributed among multiple files. In the exemplary embodiment, the network models 1602 and the dynamic models 1606 use the same naming convention for the network elements.
  • In the exemplary embodiment, the sub-system definitions 1604 are based on the network model 1602 and one or more maps of the power plant. A sub-system identification module combines the network model 1602 and the one or more maps to generate the sub-system definition 1604. In some embodiments, the sub-system definition 1604 is provided via an XML file that defines the POI(s) and generators that make up a power plant. Power plants are defined by the generators in the plant with their corresponding POI(s). A few examples of power plant sub-system definitions are listed below in TABLE 1.
  • In the exemplary embodiment, the system 1600 provides a user interface 1638 to facilitate defining the power plant starting from a potential POI. Potential POIs are identified as terminals/buses in the system having all required measurements (V, f, P, Q) to perform model validation and calibration. A measurement mapping module identifies terminals with V, f, P, Q measurements and lets the user search for radially connected generators starting from potential POIs. Sub-system definitions 1604 may also be saved for future use. In some embodiments, a sub-system definition 1604 is defined for each event 1608.
  • Events 1608 are where the voltage and/or the frequency of the power system changes. For example, an event 1608 may be a generator turning on. In some embodiments, if an event 1608 has the same or similar attributes to a previous event 1608, such as that same generator turning on, the event 1608 is skipped to reduce redundant processing. In the exemplary embodiment, the event data or Phasor data 1608 will be imported from a variety of sources, such as, but not limited to, e-terraphasorpoint, openPDC, CSV files, COMTRADE files, and PI historian. In the exemplary embodiment, the POIs will have at least voltage, frequency, real power, and reactive power measurements. In some embodiments, voltage angle is substituted for frequency.
  • The network models 1602, sub-system definitions 1604, dynamic models 1606, and event data 1608 are analyzed by the system 1600 as described herein. In the exemplary embodiment disclosed herein, the model utilizes multiple disturbance events to validate and calibrate power system models for compliance with NERC mandated grid reliability requirements.
  • In some embodiments, the user accesses the user interface 1638 to set the total number of events 1608 that will be used in process 1500, set the stored file locations, and set the sequence that the events 1608 will be analyzed in.
  • In the exemplary embodiment, system 1600 includes a set of initial parameters 1612. In some embodiments, the set of initial parameters 1612 are based on the dynamic model 1606. The initial parameters 1612 and a first event 1614 are set as inputs and a model validation and calibration (MVC) 1616 is performed using those parameters 1612 and that first event 1614. In some embodiments, the MVC 1616 is performed by the simulation engine 308 (shown in FIG. 3). In some embodiments, the MVC 1616 is associated with the MVC process 1506 (shown in FIG. 15) and/or the MVC process 500 (shown in FIG. 5). The MVC 1616 generates a response 1618, which includes statistics about how the initial parameters 1612 performed in matching up to the first event 1614 based on the MVC process 1506. The MVC 1616 also generates a first set of updated parameters 1620 based on the event's performance in the MVC process 1506.
  • In some embodiments, the MVC 1616 uses the initial parameters 1612 and the voltage and frequency of the first event 1614 to predict the active and reactive power for the first event 1614. Then the MVC 1616 compares the predicted active and reactive power to the actual active and reactive power for the first event 1614. The MVC 1616 adjusts the parameters 1612 into the first set of updated parameters 1620 based on that comparison and also uses the comparison to generate the first response 1618.
  • In the exemplary embodiment, the system 1600 uses the first set of updated parameters 1620 with the second event 1622 as inputs into the MVC process 1506 to generate a second updated set of parameters 1628 and a second response 1626. The second updated set of parameters 1628 is then used with a third event 1630 as another set of inputs for the MVC process 1506 to generate a third updated set of parameters 1636 and a third response 1634.
  • In the exemplary embodiment, the system 1600 continues to serially analyze events 1608 to generate updated parameter sets. For example, if the system 1600 receives 25 events 1608, then each event 1608 will be analyzed in order to determine updated parameters based on that event 1608 and the MVC process 1506, with the goal being that the parameters allow the MVC process 1506 to generate adjusted parameters to accurately predict the outcome of the plurality of events.
  • In some embodiments, the user may use the user interface 1638 to review the responses and the updated parameters. Furthermore, the user interface 1638 may allow the user to determine the order that the events 1608 are analyzed. In other embodiments, the system 1600 may serially analyze the events 1608 in a plurality of orders to determine the ideal set of updated parameters.
  • FIG. 17 is a data flow diagram illustrating the architecture system 300 (shown in FIG. 3) executing a parameter selection process 1700 in accordance with at least one embodiment. In the exemplary embodiment, parameter selection process 1700 is performed based on the results of process 1500 (shown in FIG. 15) and using architecture 1600 similar to that shown in FIG. 16 and/or architecture 300 similar to that shown in FIG. 3.
  • In the exemplary embodiment, process 1700 uses a model validation component 1704. In the exemplary embodiment, model validation component 1704 is similar to model validation 535 (shown in FIG. 5) and includes Steps 505-515 (shown in FIG. 5). In this embodiment, the model validation component 1704 performs Steps 505-515 and generates a response based on the results.
  • In the exemplary embodiment, the plurality of events 1608 are combined into an event set 1702, which allows the model validation component 1704 to playback all of the events 1608 in the event set 1702. In the exemplary embodiment, the model validation component 1704 analyzes a set of parameters, such as first set of parameters 1612, based on all of the events 1608 in the event set 1702 to generate a first response 1706.
  • In some embodiments, the model validation component 1704 generates a mean square error for each event 1608 and then combines the individual mean square errors into a single mean square error for the event set 1702. The mean square error is provided in the first response 1706. While mean square error is described herein, one having skill in the art would understand that other methods of evaluating and ranking the parameter sets may be used.
  • The process 1700 further includes generating a second response 1708 where the first set of updated parameters 1620 are analyzed based on the event set 1702. A third response 1710 is generated based on the second set of updated parameters 1628 and a fourth response 1712 is generated based on a third set of updated parameters 1636. For each set of updated parameters generated by process 1500, process 1700 analyzes that set of updated parameters compared to the event set 1702.
  • The plurality of responses are then provided to a best result selection component 1714. The best result selection component 1714 compares the results for each set of parameters to determine which is the optimal set of parameters to use for the model. In some embodiments, the best result selection component 1714 compares the mean square error provided in the results to the other results to determine which set of updated parameters to use. In other embodiments, the best result selection component 1714 compares the results to a threshold, and when the results meet that threshold, the best result selection component 1714 chooses the corresponding set of parameters. In some further embodiments, process 1700 is executed in parallel with process 1500. In these embodiments, when an updated set of parameters is generated in process 1500, process 1700 is used to analyze those parameters. In some further embodiments, when the results meet the desired threshold, process 1700 instructs process 1500 to end.
  • In some embodiments, the parameter sets are analyzed serially. In other embodiments, the parameters sets are analyzed in parallel.
  • In some embodiments, the parameter set selected by the best result selection component 1714 is transmitted to the user, such as in a dyd file, as the calibrated parameters 312 (shown in FIG. 3).
  • In other words, for each set of calibrated parameters the process 1700 conducts model validation 1704 across each event 1608. During the model validation 1704, the simulated response based on the calibrated parameters is compared with the measurement response. Then the best set of calibrated parameters is selected as the one that leads to the minimal overall residual error between the simulated response and the measurement.
  • For example, the first set of parameters is used to generate three simulated responses against each of events 1, 2, and 3. The residual or deviation between the simulated response and the measurement response for each event is r1, r2, r3. The overall residual for all three events would be the average of the three residuals, r10=(r1+r2+r3)/3. The first step is repeated using the SECOND set of parameters and the overall residual is generated across all events, r20=(r1+r2+r3)/3.
  • The first step is repeated using the THIRD set of parameters, and the overall residual is generated across all events, r30=(r1+r2+r3)/3. The first step is repeated using the FOURTH set of parameters to generate the overall residual across all events, r40=(r1+r2+r3)/3.
  • The best result selection component 1714 selects the set of parameters with the minimal residual among all the residuals r10, r20, r30, r40.
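  • A compact sketch of this selection step is shown below; the validate() callable returning a per-event residual, and the toy scalar example, are assumptions for illustration.

```python
# Sketch: keep the candidate parameter set with the smallest average residual.
import numpy as np

def select_best(parameter_sets, events, validate):
    """Average the per-event residual of every candidate set and keep the smallest."""
    overall = [np.mean([validate(params, event) for event in events])
               for params in parameter_sets]
    best = int(np.argmin(overall))
    return parameter_sets[best], overall[best]

# Toy usage: the "residual" is the distance of a scalar parameter from each event value.
sets = [0.2, 0.5, 0.9]
evts = [0.45, 0.55, 0.5]
best_params, best_residual = select_best(sets, evts, lambda p, e: abs(p - e))
```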
  • At least one of the technical solutions to the technical problems provided by this system may include: (i) improved speed in modeling parameters; (ii) more robust models in response to measurement noise; (iii) compliance with NERC mandated grid reliability requirements; (iv) reduced chance that an important parameter is not updated; (v) improved accuracy in parameter identifiability; (vi) improved accuracy in parameter estimation; and (vii) improved optimization of parameters based on event training.
  • The methods and systems described herein may be implemented using computer programming or engineering techniques including computer software, firmware, hardware, or any combination or subset thereof, wherein the technical effects may be achieved by performing at least one of the following steps: (a) store a model of the power system, wherein the model includes a plurality of events; (b) receive, from the at least one sensor, event data associated with an event of the power system; (c) analyze the event data to determine if the event is different from the plurality of events; (d) determine at least one parameter associated with the event; and (e) optimize the model to account for the event.
  • The computer-implemented methods discussed herein may include additional, less, or alternate actions, including those discussed elsewhere herein. The methods may be implemented via one or more local or remote processors, transceivers, servers, and/or sensors, and/or via computer-executable instructions stored on non-transitory computer-readable media or medium.
  • Additionally, the computer systems discussed herein may include additional, less, or alternate functionality, including that discussed elsewhere herein. The computer systems discussed herein may include or be implemented via computer-executable instructions stored on non-transitory computer-readable media or medium.
  • A processor or a processing element may employ artificial intelligence and/or be trained using supervised or unsupervised machine learning, and the machine learning program may employ a neural network, which may be a convolutional neural network, a deep learning neural network, or a combined learning module or program that learns in two or more fields or areas of interest. Machine learning may involve identifying and recognizing patterns in existing data in order to facilitate making predictions for subsequent data. Models may be created based upon example inputs in order to make valid and reliable predictions for novel inputs.
  • Additionally or alternatively, the machine learning programs may be trained by inputting sample data sets or certain data into the programs, such as image data, text data, report data, and/or numerical analysis. The machine learning programs may utilize deep learning algorithms that may be primarily focused on pattern recognition, and may be trained after processing multiple examples. The machine learning programs may include Bayesian program learning (BPL), voice recognition and synthesis, image or object recognition, optical character recognition, and/or natural language processing—either individually or in combination. The machine learning programs may also include natural language processing, semantic analysis, automatic reasoning, and/or machine learning.
  • In supervised machine learning, a processing element may be provided with example inputs and their associated outputs, and may seek to discover a general rule that maps inputs to outputs, so that when subsequent novel inputs are provided the processing element may, based upon the discovered rule, accurately predict the correct output. In unsupervised machine learning, the processing element may be required to find its own structure in unlabeled example inputs. In one embodiment, machine learning techniques may be used to extract data about the computer device, the user of the computer device, the computer network hosting the computer device, services executing on the computer device, and/or other data.
  • Based upon these analyses, the processing element may learn how to identify characteristics and patterns that may then be applied to training models, analyzing sensor data, and detecting abnormalities.
  • As will be appreciated based upon the foregoing specification, the above-described embodiments of the disclosure may be implemented using computer programming or engineering techniques including computer software, firmware, hardware or any combination or subset thereof. Any such resulting program, having computer-readable code means, may be embodied or provided within one or more computer-readable media, thereby making a computer program product, i.e., an article of manufacture, according to the discussed embodiments of the disclosure. The computer-readable media may be, for example, but is not limited to, a fixed (hard) drive, diskette, optical disk, magnetic tape, semiconductor memory such as read-only memory (ROM), and/or any transmitting/receiving medium, such as the Internet or other communication network or link. The article of manufacture containing the computer code may be made and/or used by executing the code directly from one medium, by copying the code from one medium to another medium, or by transmitting the code over a network.
  • These computer programs (also known as programs, software, software applications, “apps”, or code) include machine instructions for a programmable processor, and can be implemented in a high-level procedural and/or object-oriented programming language, and/or in assembly/machine language. As used herein, the terms “machine-readable medium” and “computer-readable medium” refer to any computer program product, apparatus and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The “machine-readable medium” and “computer-readable medium,” however, do not include transitory signals. The term “machine-readable signal” refers to any signal used to provide machine instructions and/or data to a programmable processor.
  • As used herein, a processor may include any programmable system including systems using micro-controllers, reduced instruction set circuits (RISC), application specific integrated circuits (ASICs), logic circuits, and any other circuit or processor capable of executing the functions described herein. The above examples are example only, and are thus not intended to limit in any way the definition and/or meaning of the term “processor.”
  • As used herein, the terms “software” and “firmware” are interchangeable, and include any computer program stored in memory for execution by a processor, including RAM memory, ROM memory, EPROM memory, EEPROM memory, and non-volatile RAM (NVRAM) memory. The above memory types are example only, and are thus not limiting as to the types of memory usable for storage of a computer program.
  • In another embodiment, a computer program is provided, and the program is embodied on a computer-readable medium. In an example embodiment, the system is executed on a single computer system, without requiring a connection to a server computer. In a further example embodiment, the system is being run in a Windows® environment (Windows is a registered trademark of Microsoft Corporation, Redmond, Wash.). In yet another embodiment, the system is run on a mainframe environment and a UNIX® server environment (UNIX is a registered trademark of X/Open Company Limited located in Reading, Berkshire, United Kingdom). In a further embodiment, the system is run on an iOS® environment (iOS is a registered trademark of Cisco Systems, Inc. located in San Jose, Calif.). In yet a further embodiment, the system is run on a Mac OS® environment (Mac OS is a registered trademark of Apple Inc. located in Cupertino, Calif.). In still yet a further embodiment, the system is run on Android® OS (Android is a registered trademark of Google, Inc. of Mountain View, Calif.). In another embodiment, the system is run on Linux® OS (Linux is a registered trademark of Linus Torvalds of Boston, Mass.). The application is flexible and designed to run in various different environments without compromising any major functionality.
  • In some embodiments, the system includes multiple components distributed among a plurality of computer devices. One or more components may be in the form of computer-executable instructions embodied in a computer-readable medium. The systems and processes are not limited to the specific embodiments described herein. In addition, components of each system and each process can be practiced independent and separate from other components and processes described herein. Each component and process can also be used in combination with other assembly packages and processes. The present embodiments may enhance the functionality and functioning of computers and/or computer systems.
  • As used herein, an element or step recited in the singular and preceded by the word “a” or “an” should be understood as not excluding plural elements or steps, unless such exclusion is explicitly recited. Furthermore, references to “example embodiment,” “exemplary embodiment,” or “one embodiment” of the present disclosure are not intended to be interpreted as excluding the existence of additional embodiments that also incorporate the recited features.
  • The patent claims at the end of this document are not intended to be construed under 35 U.S.C. § 112(f) unless traditional means-plus-function language, such as “means for” or “step for” language, is expressly recited in the claim(s).
  • This written description uses examples to disclose the disclosure, including the best mode, and also to enable any person skilled in the art to practice the disclosure, including making and using any devices or systems and performing any incorporated methods. The patentable scope of the disclosure is defined by the claims, and may include other examples that occur to those skilled in the art. Such other examples are intended to be within the scope of the claims if they have structural elements that do not differ from the literal language of the claims, or if they include equivalent structural elements with insubstantial differences from the literal language of the claims.

Claims (20)

1. A system for sequential power system model calibration comprising a computing device including at least one processor in communication with at least one memory device, wherein the at least one processor is programmed to:
store a model of a device, wherein the model includes a plurality of parameters;
receive a plurality of events associated with the device;
filter the plurality of events to generate a plurality of unique events;
sequentially analyze the plurality of unique events to determine a set of calibrated parameters for the model; and
update the model to include the set of calibrated parameters.
2. The system in accordance with claim 1, wherein the at least one processor is further programmed to:
execute the model based on one or more events of the plurality of events to generate one or more results; and
identify one or more sensitive parameters based on the one or more results.
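A note on claim 2: it describes executing the model and identifying sensitive parameters from the results. One common way to screen for sensitivity, offered here only as a minimal sketch and not as the claimed method, is one-at-a-time perturbation; the `simulate` routine, the perturbation size, and the ranking rule below are illustrative assumptions.

```python
import numpy as np

def identify_sensitive_parameters(theta, event, simulate, rel_step=0.05, top_k=5):
    """Rank parameters by how much a small perturbation of each one
    changes the simulated response (a simple one-at-a-time screen)."""
    base = simulate(theta, event)
    scores = []
    for i in range(len(theta)):
        perturbed = np.array(theta, dtype=float)
        perturbed[i] *= 1.0 + rel_step        # perturb one parameter at a time
        scores.append(np.linalg.norm(simulate(perturbed, event) - base))
    # Indices of the top_k most sensitive parameters, most sensitive first.
    return list(np.argsort(scores)[::-1][:top_k])
```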
3. The system in accordance with claim 2, wherein the at least one processor is further programmed to:
perform a Bayesian optimization on the one or more sensitive parameters to determine updated values for the one or more sensitive parameters.
4. The system in accordance with claim 3, wherein to perform the Bayesian optimization the at least one processor is further programmed to:
determine the updated values for the one or more sensitive parameters based on a nonlinear optimization, wherein an objective function of the nonlinear optimization includes a first term and a second term;
calculate the first term as the residual between a simulated response based on the set of calibrated parameters and a measured response; and
calculate the second term as a quadratic penalty term for deviations of parameters from one or more previous estimates, wherein one or more weights for the quadratic penalty are derived from a Bayesian argument.
5. The system in accordance with claim 4, wherein the at least one processor is further programmed to derive the quadratic penalty based on a covariance matrix of previous estimated parameters.
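A note on claims 3-5: they describe an objective with a residual term and a quadratic penalty whose weights come from the covariance of previously estimated parameters. The sketch below is one plausible reading of that objective, assuming a generic `simulate(theta, event)` routine; the names and the use of the inverse covariance as the weight matrix are assumptions, not language from the specification.

```python
import numpy as np

def penalized_objective(theta, event, measured_response, theta_prev, prev_cov, simulate):
    """One calibration-step objective: simulation residual plus a quadratic
    penalty on deviation from the previous parameter estimate."""
    # First term: residual between the simulated and measured responses.
    residual = simulate(theta, event) - measured_response
    residual_term = float(residual @ residual)

    # Second term: quadratic penalty weighted by the inverse covariance of
    # previously estimated parameters (a Bayesian prior-like term).
    delta = theta - theta_prev
    penalty_term = float(delta @ np.linalg.inv(prev_cov) @ delta)

    return residual_term + penalty_term
```

In practice an objective of this form would be handed to a nonlinear optimizer (for example, `scipy.optimize.minimize`) over the sensitive parameters only, with the remaining parameters held at their previous values.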
6. The system in accordance with claim 1, wherein the at least one processor is further programmed to code each of the plurality of events based on one or more dynamic features of the corresponding event.
7. The system in accordance with claim 6, wherein the plurality of events are each coded into a bit-string.
8. The system in accordance with claim 6, wherein the one or more dynamic features include one or more of a peak value, a bottom value, an overshoot percentage, a rising time, a settling time, a phase shift, a damping ratio, an energy function, a cumulative deviation in energy, Fourier transformation spectrum information, a principal component, and a steady state gain of the corresponding event.
9. The system in accordance with claim 6, wherein the at least one processor is further programmed to extract the one or more dynamic features from a time series of active power, reactive power, voltage, and frequency of the corresponding event.
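A note on claims 8 and 9: they list the dynamic features and the channels they are extracted from. The sketch below illustrates how a few of those features (peak, bottom, overshoot, settling time) might be computed for a single channel such as active power; the sampling interval, settling band, and function name are assumptions.

```python
import numpy as np

def dynamic_features(signal, dt, settle_band=0.02):
    """Extract a few illustrative dynamic features from one event channel
    (e.g., active power, reactive power, voltage, or frequency)."""
    final_value = signal[-1]
    peak_value = float(np.max(signal))
    bottom_value = float(np.min(signal))

    # Overshoot percentage relative to the settled (final) value.
    overshoot_pct = 100.0 * (peak_value - final_value) / abs(final_value) if final_value else 0.0

    # Settling time: first time after which the signal stays within
    # +/- settle_band of the final value.
    band = settle_band * abs(final_value)
    outside = np.abs(signal - final_value) > band
    last_outside = np.max(np.nonzero(outside)[0]) if outside.any() else -1
    settling_time = (last_outside + 1) * dt

    return {"peak": peak_value, "bottom": bottom_value,
            "overshoot_pct": float(overshoot_pct), "settling_time": float(settling_time)}
```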
10. The system in accordance with claim 6, wherein the at least one processor is further programmed to:
code each of the plurality of events as a binary vector;
compare the plurality of binary vectors using the Tanimoto coefficient;
discard similar subsequent events based on a similarity threshold; and
generate the plurality of unique events based on at least one remaining event.
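A note on claims 6, 7, and 10: they describe coding each event into a binary vector and discarding later events that are too similar to ones already kept, as measured by the Tanimoto coefficient. A minimal sketch of that screening step follows; the threshold value and helper names are illustrative assumptions.

```python
import numpy as np

def tanimoto(a, b):
    """Tanimoto coefficient between two binary vectors (ratio of the
    intersection count to the union count)."""
    a, b = np.asarray(a, dtype=bool), np.asarray(b, dtype=bool)
    union = np.logical_or(a, b).sum()
    return float(np.logical_and(a, b).sum()) / union if union else 1.0

def filter_unique_events(coded_events, similarity_threshold=0.9):
    """Keep the first event of each group of similar events; discard later
    events whose similarity to any kept event exceeds the threshold."""
    unique = []
    for event in coded_events:
        if all(tanimoto(event, kept) < similarity_threshold for kept in unique):
            unique.append(event)
    return unique
```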
11. The system in accordance with claim 1, wherein the plurality of unique events includes a first event and a second event, wherein the model includes a first set of parameters, and wherein the at least one processor is further programmed to:
execute the model using the first set of parameters and the first event to generate a first set of results;
analyze the first set of results to generate a second set of parameters;
execute the model using the second set of parameters and the second event to generate a second set of results; and
analyze the second set of results to generate a third set of parameters.
12. The system in accordance with claim 11, wherein the plurality of unique events includes a third event, and wherein the at least one processor is further programmed to:
execute the model using the third set of parameters and the third event to generate a third set of results; and
analyze the third set of results to generate a fourth set of parameters.
13. The system in accordance with claim 12, wherein the at least one processor is further programmed to compare the first set of results, the second set of results, and the third set of results to determine the set of calibrated parameters.
14. The system in accordance with claim 13, wherein each set of results includes a residual error between a simulated response and a measured response, and wherein the at least one processor is further programmed to compare the plurality of residual errors to select the set of calibrated parameters with the minimal overall residual error.
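A note on claims 11-14: they describe executing the model event by event, carrying each newly generated parameter set into the next event, and finally comparing residual errors across the results to pick the calibrated set. The loop below is only a schematic rendering of that flow; `run_model`, `calibrate`, and the pairing of each residual with a parameter set are assumptions rather than definitions from the patent.

```python
def sequential_calibration(initial_params, unique_events, run_model, calibrate):
    """Sequentially calibrate parameters over the unique events and return
    the candidate parameter set with the smallest residual error."""
    params = initial_params
    candidates = []  # (residual_error, parameter set produced by this event)
    for event in unique_events:
        results = run_model(params, event)          # execute model with current parameters
        params = calibrate(params, results, event)  # analyze results -> next parameter set
        candidates.append((results["residual_error"], params))

    # Compare residual errors and keep the parameter set associated with
    # the minimal overall residual.
    _, best_params = min(candidates, key=lambda c: c[0])
    return best_params
```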
15. The system in accordance with claim 1, wherein the plurality of events include sensor data associated with the device during the corresponding event.
16. The system in accordance with claim 1, wherein the device includes a power system and the model simulates behavior of the power system.
17. A computer-implemented method for sequential power system model calibration, the method implemented by a computing device including at least one processor in communication with at least one memory device, wherein the method includes:
storing a model of a device, wherein the model includes a plurality of parameters;
receiving a plurality of events associated with the device;
filtering the plurality of events to generate a plurality of unique events;
sequentially analyzing the plurality of unique events to determine a set of calibrated parameters for the model; and
updating the model to include the set of calibrated parameters.
18. The method in accordance with claim 17 further comprising:
executing the model based on one or more events of the plurality of events to generate one or more results;
identifying one or more sensitive parameters based on the one or more results; and
performing a Bayesian optimization on the one or more sensitive parameters to determine updated values for the one or more sensitive parameters.
19. The method in accordance with claim 17 further comprising:
coding each of the plurality of events based on one or more dynamic features of the corresponding event, wherein the plurality of events are each coded into a binary vector;
comparing the plurality of binary vectors using the Tanimoto coefficient;
discarding similar subsequent events based on a similarity threshold; and
generating the plurality of unique events based on at least one remaining event.
20. A non-transitory computer-readable storage media having computer-executable instructions embodied thereon, wherein when executed by a computing device having at least one processor coupled to at least one memory device, the computer-executable instructions cause the processor to:
store a model of a device, wherein the model includes a plurality of parameters;
receive a plurality of events associated with the device;
filter the plurality of events to generate a plurality of unique events;
sequentially analyze the plurality of unique events to determine a set of calibrated parameters for the model; and
update the model to include the set of calibrated parameters.
US16/572,111 2019-04-12 2019-09-16 Systems and methods for sequential power system model parameter estimation Abandoned US20200327435A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/572,111 US20200327435A1 (en) 2019-04-12 2019-09-16 Systems and methods for sequential power system model parameter estimation

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201962833492P 2019-04-12 2019-04-12
US16/572,111 US20200327435A1 (en) 2019-04-12 2019-09-16 Systems and methods for sequential power system model parameter estimation

Publications (1)

Publication Number Publication Date
US20200327435A1 true US20200327435A1 (en) 2020-10-15

Family

ID=72748104

Family Applications (4)

Application Number Title Priority Date Filing Date
US16/572,111 Abandoned US20200327435A1 (en) 2019-04-12 2019-09-16 Systems and methods for sequential power system model parameter estimation
US16/601,732 Active 2040-09-14 US11544426B2 (en) 2019-04-12 2019-10-15 Systems and methods for enhanced sequential power system model parameter estimation
US16/690,965 Active 2040-06-10 US11347907B2 (en) 2019-04-12 2019-11-21 Systems and methods for distributed power system model calibration
US16/698,058 Abandoned US20200327264A1 (en) 2019-04-12 2019-11-27 Systems and methods for enhanced power system model calibration

Family Applications After (3)

Application Number Title Priority Date Filing Date
US16/601,732 Active 2040-09-14 US11544426B2 (en) 2019-04-12 2019-10-15 Systems and methods for enhanced sequential power system model parameter estimation
US16/690,965 Active 2040-06-10 US11347907B2 (en) 2019-04-12 2019-11-21 Systems and methods for distributed power system model calibration
US16/698,058 Abandoned US20200327264A1 (en) 2019-04-12 2019-11-27 Systems and methods for enhanced power system model calibration

Country Status (1)

Country Link
US (4) US20200327435A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023058275A1 (en) * 2021-10-05 2023-04-13 Mitsubishi Electric Corporation Calibration system and method for calibrating an industrial system model using simulation failure

Families Citing this family (33)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11221353B2 (en) * 2018-07-06 2022-01-11 Schneider Electric USA, Inc. Systems and methods for analyzing power quality events in an electrical system
DE102019208263A1 (en) * 2019-06-06 2020-12-10 Robert Bosch Gmbh Method and device for determining a control strategy for a technical system
DE102019208262A1 (en) * 2019-06-06 2020-12-10 Robert Bosch Gmbh Method and device for determining model parameters for a control strategy of a technical system with the help of a Bayesian optimization method
US11042132B2 (en) * 2019-06-07 2021-06-22 Battelle Memorial Institute Transformative remedial action scheme tool (TRAST)
US20210133376A1 (en) * 2019-11-04 2021-05-06 Global Energy Interconnection Research Institute Co. Ltd Systems and methods of parameter calibration for dynamic models of electric power systems
US11551083B2 (en) * 2019-12-17 2023-01-10 Soundhound, Inc. Neural network training from private data
US11444483B2 (en) * 2020-01-14 2022-09-13 Hitachi Energy Switzerland Ag Adaptive state estimation for power systems
US11797340B2 (en) * 2020-05-14 2023-10-24 Hewlett Packard Enterprise Development Lp Systems and methods of resource configuration optimization for machine learning workloads
WO2022086176A1 (en) * 2020-10-21 2022-04-28 포항공과대학교 산학협력단 Method for distribution of phasor-aided state estimation to monitor operating state of large scale power system and method for processing defect data in mixed distributed state estimation by using same
CN112731826A (en) * 2020-12-11 2021-04-30 国网宁夏电力有限公司吴忠供电公司 Internet of things control method based on intelligent sensor
CN112487592B (en) * 2020-12-16 2022-01-18 北京航空航天大学 Bayesian network-based task reliability modeling analysis method
CN112651112B (en) * 2020-12-17 2023-07-11 湖南大学 Collaborative decision-making method, system and equipment for electric energy transaction and system operation of internet micro-grid
CN112653185B (en) * 2020-12-22 2023-01-24 广东电网有限责任公司电力科学研究院 Efficiency evaluation method and system of distributed renewable energy power generation system
CN113139232B (en) * 2021-01-15 2023-12-26 中国人民解放军91550部队 Aircraft post-positioning method and system based on incomplete measurement
US11176442B1 (en) * 2021-02-11 2021-11-16 North China Electric Power University Fast power system disturbance identification using enhanced LSTM network with renewable energy integration
US11170304B1 (en) * 2021-02-25 2021-11-09 North China Electric Power University Bad data detection algorithm for PMU based on spectral clustering
US11573023B2 (en) * 2021-03-07 2023-02-07 Mitsubishi Electric Research Laboratories, Inc. Controlling vapor compression system using probabilistic surrogate model
EP4060559B1 (en) * 2021-03-15 2024-01-10 Siemens Aktiengesellschaft Training data set, training and artificial neural network for estimating the condition of a power network
US20220300679A1 (en) * 2021-03-19 2022-09-22 X Development Llc Simulating electrical power grid operations
CN113094887A (en) * 2021-03-31 2021-07-09 清华大学 Optimization method and device for frequency shift electromagnetic transient simulation and electronic equipment
US20220335179A1 (en) * 2021-04-07 2022-10-20 Mitsubishi Electric Research Laboratories, Inc. System and Method for Calibrating a Model of Thermal Dynamics
CN113408741B (en) * 2021-06-22 2022-12-27 重庆邮电大学 Distributed ADMM machine learning method of self-adaptive network topology
CN113433502B (en) * 2021-07-28 2022-09-06 武汉市华英电力科技有限公司 Capacitance and inductance tester calibration method and device based on waveform simulation
CN113569411B (en) * 2021-07-29 2023-09-26 湖北工业大学 Disaster weather-oriented power grid operation risk situation awareness method
CN113779493A (en) * 2021-09-16 2021-12-10 重庆大学 Distributed intelligent energy management method for multiple intelligent families
US11868689B2 (en) * 2021-10-11 2024-01-09 KLA Corp. Systems and methods for setting up a physics-based model
CN114047372B (en) * 2021-11-16 2024-03-12 国网福建省电力有限公司营销服务中心 Voltage characteristic-based platform region topology identification system
US11916382B2 (en) * 2021-11-19 2024-02-27 Caterpillar Inc. Optimized operation plan for a power system
FR3131988A1 (en) * 2022-01-19 2023-07-21 Electricite De France Bayesian forecast of individual consumption and balancing of an electricity network
CN115659779B (en) * 2022-09-26 2023-06-23 国网江苏省电力有限公司南通供电分公司 New energy access optimization strategy for multi-DC feed-in receiving end power grid
CN116341394B (en) * 2023-05-29 2023-09-15 南方电网数字电网研究院有限公司 Hybrid driving model training method, device, computer equipment and storage medium
CN116433225B (en) * 2023-06-12 2023-08-29 国网湖北省电力有限公司经济技术研究院 Multi-time scale fault recovery method, device and equipment for interconnected micro-grid
CN117411184A (en) * 2023-10-26 2024-01-16 唐山昌宏科技有限公司 Intelligent command system for emergency treatment of medium-low voltage power supply

Family Cites Families (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
SE468691B (en) 1991-06-26 1993-03-01 Asea Brown Boveri METHOD IS TO CREATE A LOGICAL DESCRIPTION OF A SIGNAL THROUGH IDENTIFICATION OF THEIR CONDITION AND A CHANGE OF THE CONDITION
US20070055392A1 (en) 2005-09-06 2007-03-08 D Amato Fernando J Method and system for model predictive control of a power plant
US9092593B2 (en) 2007-09-25 2015-07-28 Power Analytics Corporation Systems and methods for intuitive modeling of complex networks in a digital environment
US9557723B2 (en) 2006-07-19 2017-01-31 Power Analytics Corporation Real-time predictive systems for intelligent energy monitoring and management of electrical power networks
US20100125347A1 (en) 2008-11-19 2010-05-20 Harris Corporation Model-based system calibration for control systems
WO2012103246A2 (en) * 2011-01-25 2012-08-02 Power Analytics Corporation Systems and methods for real-time dc microgrid power analytics for mission-critical power systems
WO2012103244A2 (en) 2011-01-25 2012-08-02 Power Analytics Corporation Systems and methods for automated model-based real-time simulation of a microgrid for market-based electric power system optimization
KR101219545B1 (en) 2011-09-14 2013-01-09 주식회사 파워이십일 Optimized parameter estimation method for power system
US20130253718A1 (en) 2012-03-23 2013-09-26 Power Analytics Corporation Systems and methods for integrated, model, and role-based management of a microgrid based on real-time power management
US9633315B2 (en) 2012-04-27 2017-04-25 Excalibur Ip, Llc Method and system for distributed machine learning
US9645558B2 (en) 2012-09-29 2017-05-09 Operation Technology, Inc. Dynamic parameter tuning using particle swarm optimization
US9864820B2 (en) 2012-10-03 2018-01-09 Operation Technology, Inc. Generator dynamic model parameter estimation and tuning using online data and subspace state space model
CN103530819A (en) 2013-10-18 2014-01-22 国家电网公司 Method and equipment for determining output power of grid-connected photovoltaic power station power generation system
US9645219B2 (en) * 2013-11-01 2017-05-09 Honeywell International Inc. Systems and methods for off-line and on-line sensor calibration
US20150149128A1 (en) 2013-11-22 2015-05-28 General Electric Company Systems and methods for analyzing model parameters of electrical power systems using trajectory sensitivities
WO2015154216A1 (en) 2014-04-08 2015-10-15 Microsoft Technology Licensing, Llc Deep learning using alternating direction method of multipliers
US9660458B2 (en) * 2014-05-06 2017-05-23 Google Inc. Electrical load management
US9916540B2 (en) * 2015-01-22 2018-03-13 Microsoft Technology Licensing, Llc Scalable-effort classifiers for energy-efficient machine learning
US10103666B1 (en) * 2015-11-30 2018-10-16 University Of South Florida Synchronous generator modeling and frequency control using unscented Kalman filter
CN106709626A (en) 2016-11-14 2017-05-24 国家电网公司 Power grid development dynamic comprehensive evaluation method based on Bayesian network
CN106845794A (en) 2016-12-28 2017-06-13 国电南瑞科技股份有限公司 A kind of online check method of electric network model that system is dispatched for intelligent grid
CN106786671B (en) 2017-01-19 2019-05-31 广西电网有限责任公司电力科学研究院 A kind of intelligent quantization weighting Hydropower Unit automatic electricity generation control system and algorithm
US10371740B2 (en) 2017-05-31 2019-08-06 University Of Tennessee Research Foundation Power system disturbance localization using recurrence quantification analysis and minimum-volume-enclosing ellipsoid
US10809683B2 (en) 2017-10-26 2020-10-20 General Electric Company Power system model parameter conditioning tool
WO2019109084A1 (en) * 2017-12-01 2019-06-06 California Institute Of Technology Optimization framework and methods for adaptive ev charging
CN109119999A (en) 2018-07-24 2019-01-01 国家电网公司西北分部 A kind of model parameters of electric power system discrimination method and device
US10804702B2 (en) * 2018-10-11 2020-10-13 Centrica Business Solutions Belgium Self-organizing demand-response system

Also Published As

Publication number Publication date
US20200327205A1 (en) 2020-10-15
US11544426B2 (en) 2023-01-03
US20200327206A1 (en) 2020-10-15
US20200327264A1 (en) 2020-10-15
US11347907B2 (en) 2022-05-31

Similar Documents

Publication Publication Date Title
US20200327435A1 (en) Systems and methods for sequential power system model parameter estimation
US20200379424A1 (en) Systems and methods for enhanced power system model validation
US11636557B2 (en) Systems and methods for enhanced power system model validation
WO2020197533A1 (en) Surrogate of a simulation engine for power system model calibration
CA2916454C (en) Distribution transformer heavy loading and overloading mid-term and short-term pre-warning analytics model
US8078552B2 (en) Autonomous adaptive system and method for improving semiconductor manufacturing quality
US10809683B2 (en) Power system model parameter conditioning tool
US7881814B2 (en) Method and system for rapid modeling and verification of excitation systems for synchronous generators
US20210399546A1 (en) Power system measurement based model calibration with enhanced optimization
CN104572333A (en) Systems and methods for detecting, correcting, and validating bad data in data streams
CN110221976B (en) Quantitative evaluation method for quality of metering terminal software based on measurement technology
Wang et al. A DRL-aided multi-layer stability model calibration platform considering multiple events
US20210064713A1 (en) Systems and methods for interactive power system model calibration
CN111680407A (en) Satellite health assessment method based on Gaussian mixture model
Önder et al. Classification of smart grid stability prediction using cascade machine learning methods and the internet of things in smart grid
US20210124854A1 (en) Systems and methods for enhanced power system model parameter estimation
CN114583767B (en) Data-driven wind power plant frequency modulation response characteristic modeling method and system
Fellner et al. Data driven transformer level misconfiguration detection in power distribution grids
Benqlilou Data reconciliation as a framework for chemical processes optimization and control
Brosinsky et al. Machine learning and digital twins: monitoring and control for dynamic security in power systems
US20200387121A1 (en) Transformative remedial action scheme tool (trast)
Wang Operationalizing synchrophasors for enhanced grid reliability and asset utilization
Mares et al. An Architecture to Improve Energy-Related Time-Series Model Validity Based on the Novel rMAPE Performance Metric
US20240070352A1 (en) Simulating electrical grid transmission and distribution using multiple simulators
Cornalino et al. Probabilistic Modeling for the Uruguayan electrical load: present capacity and current improvements

Legal Events

Date Code Title Description
AS Assignment

Owner name: GENERAL ELECTRIC COMPANY, NEW YORK

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:WANG, HONGGANG;MENON, ANUP;SIGNING DATES FROM 20190913 TO 20190916;REEL/FRAME:050391/0561

AS Assignment

Owner name: UNITED STATES DEPARTMENT OF ENERGY, DISTRICT OF COLUMBIA

Free format text: CONFIRMATORY LICENSE;ASSIGNOR:GENERAL ELECTRIC GLOBAL RESEARCH;REEL/FRAME:051839/0020

Effective date: 20191114

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION