US20090012770A1 - Circuit simulation

Circuit simulation

Info

Publication number
US20090012770A1
Authority
US
United States
Prior art keywords
state
branches
zero
branch
equations
Legal status
Abandoned
Application number
US12/060,556
Inventor
Oleg Wasynczuk
Juri V. Jatskevich
Current Assignee
Pc Krause & Associates Inc
Original Assignee
Pc Krause & Associates Inc
Application filed by Pc Krause & Associates Inc
Priority to US12/060,556
Assigned to P.C. KRAUSE & ASSOCIATES, INC. Assignors: JATSKEVICH, JURI; WASYNCZUK, OLEG
Publication of US20090012770A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 30/00: Computer-aided design [CAD]
    • G06F 30/30: Circuit design
    • G06F 30/36: Circuit design at the analogue level
    • G06F 30/367: Design verification, e.g. using simulation, simulation program with integrated circuit emphasis [SPICE], direct methods or relaxation methods

Definitions

  • the present invention relates to the design, modeling, simulation, and emulation of electronic circuitry. More specifically, the present invention relates to numerical time-domain simulation of analog or digital electrical circuits using mathematical expressions.
  • Present simulation systems suffer from limitations in the kinds and topologies of circuits to which they may be applied.
  • the complexity of systems to be simulated is also limited in current systems by various inefficiencies in the simulation modeling process, including state selection and state equation building methods. There is thus a need for further contributions and improvements to circuit simulation technology.
  • the underlying numerical algorithms are based upon a state-space model of the system being simulated or analyzed.
  • the system to be simulated is specified either in the form of a text file using a well-defined syntax, or graphically using boxes or icons to represent computational elements such as summers, multipliers, integrators, function generators, and/or transfer functions, for example.
  • General purpose mathematical packages operate the models by assembling a state-space model of the system from the user-supplied description. The time-domain response is then calculated numerically using established integration algorithms.
  • the system is described as an electrical circuit, the fundamental branches of which may include resistors, inductors, capacitors, voltage sources, and/or current sources, for example.
  • Other circuit elements such as diodes, transistors, or electromechanical devices may also be defined as a variation or a combination (sub-circuit) of the fundamental branches.
  • the model developer describes the circuit in the form of a list of parameters of the fundamental or user-defined circuit elements and the layout of the circuit. Nodal or modified-nodal analysis techniques are then employed to simulate operation of the circuit.
  • the differential equations associated with each inductive and capacitive branch are modeled in discrete form using difference equations to relate the branch current and voltage at a given instant of time to the branch current and/or voltage at one or more preceding instants of time.
  • the difference equations for the overall circuit are then assembled automatically and solved using established methods to simulate the time-domain response.
  • Each fundamental branch includes a switch, resistor r_br, inductor L_br, voltage source e_br, conductance g_br, capacitor C_br, and current source j_br.
  • Each parameter can be fixed or time-varying, and ideal components can be modeled by setting the remaining parameters to zero.
  • Given the parameters for each branch and the list of nodes that the branches connect, the ASMG generates a state-space model of the overall circuit.
  • the state-space representation is calculated and solved using numerical methods. If a change in switching state occurs, the state model generator then recalculates the state-space model and establishes the appropriate initial conditions for the new topology.
  • a disadvantage of this system is that it cannot be used to simulate circuits that include loops composed of voltage sources, capacitors, and/or resistors. This limitation dramatically hindered the ability of the ASMG to simulate high-frequency switching transients of power-electronic-based systems.
  • a “spanning tree” over a graph (comprising a set of branches and a set of nodes to which the branches are connected) is defined to be a subset of the set of branches such that at least one branch in the subset is connected to each node from the set of nodes, yet no loop can be formed from the branches in the subset.
  • A topology change event occurs when the control signal for one or more switching elements causes the switching state of those elements to change.
  • the nodes to which that element is connected will then become (or cease to be) galvanically connected to another portion of the circuit. In most cases, this change affects the state equations for the overall circuit.
  • One form of the present invention is a method, including creating one or more data structures sufficient to model an electronic circuit as a collection of n (at least two) elements. These comprise zero or more LRV elements, zero or more CRI elements, and zero or more switching elements.
  • the LRV elements each have at least one of (a) a non-zero inductance parameter L br , (b) a non-zero resistance parameter r br , and (c) a non-zero voltage source parameter e br , but neither a non-zero capacitance parameter, nor a non-zero current source parameter, nor a switch parameter.
  • the CRI elements each have at least one of (a) a non-zero capacitance parameter C br , (b) a non-zero resistance parameter r br , or (c) a non-zero current source parameter j br , but neither a non-zero inductance parameter, nor a non-zero voltage source parameter, nor a switch parameter.
  • the switching elements each have a switch state and neither a non-zero inductance parameter, a non-zero capacitance parameter, a non-zero resistance parameter, a non-zero voltage source parameter, nor a non-zero current source parameter.
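  • As a purely illustrative sketch (the patent does not prescribe a particular data layout), the three element kinds described above might be held in simple records such as the following; all class and field names are hypothetical.

```python
# Illustrative only: hypothetical record types for the three element kinds.
from dataclasses import dataclass

@dataclass
class LRVElement:          # inductive branch: only L_br, r_br, e_br may be non-zero
    node_from: int
    node_to: int
    L_br: float = 0.0      # inductance
    r_br: float = 0.0      # series resistance
    e_br: float = 0.0      # series voltage source

@dataclass
class CRIElement:          # capacitive branch: only C_br, r_br, j_br may be non-zero
    node_from: int
    node_to: int
    C_br: float = 0.0      # capacitance
    r_br: float = 0.0      # resistance
    j_br: float = 0.0      # parallel current source

@dataclass
class SwitchElement:       # switching element: carries only a switch state
    node_from: int
    node_to: int
    closed: bool = False
```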
  • a first set of state equations is automatically generated from the one or more data structures, and operation of the electronic circuit is simulated by application of said first set of state equations.
  • the collection comprises either (1) an LRV element for which at least two of L br , r br , or e br are non-zero, or (2) a CRI element for which at least two of C br , r br , or j br are non-zero.
  • the simulating step includes producing state output data.
  • some or all of the parameters in the first set of state equations change over (simulation) time as a function of the state output data.
  • some or all of the parameters change over (simulation) time due to a time-varying parameter of at least one element in the collection.
  • a second set of state equations is generated from the one or more data structures upon the occurrence of a first topology change event.
  • the generating step simply involves modifying only the subset of said first set of state equations that depends on the one or more switching elements that have changed.
  • each unique vector of switch states represents a topology of the overall circuit
  • the method also includes (1) storing the first set of state equations in a cache; (2) after a second topology change event, determining whether a set of state equations in the cache represents the new topology; and (3a) if the determining step is answered in the affirmative, using the set of state equations that represents the new topology to simulate operation of the circuit after the second topology change event; or (3b) if the determining step is answered in the negative, building a third set of state equations that represents the new topology, and using the third set of state equations to simulate operation of the circuit after the second topology change event.
  • the method also includes (1) storing said second set of state equations in a cache (2) after a third topology change event, deciding whether a set of state equations in the cache represents the new topology; and (3a) if the deciding step is concluded in the affirmative, using the set of state equations from the cache that represents the new topology to simulate operation of the circuit after the third topology change event; or (3b) if the deciding step is concluded in the negative, building a new set of state equations that represents the new topology, and using the new set of state equations to simulate operation of the circuit after the third topology change event.
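  • The caching behavior described in the two preceding items can be sketched as follows; this is an assumption-laden illustration in which build_state_equations stands in for the unspecified state-equation building step and the vector of switch states serves as the cache key.

```python
# Hypothetical sketch: reuse cached state equations whenever a previously seen
# combination of switch states (i.e. a previously seen topology) recurs.
def simulate_with_cache(initial_switch_states, topology_events, build_state_equations):
    cache = {}                                   # switch-state tuple -> state equations
    s = tuple(initial_switch_states)
    cache[s] = build_state_equations(s)          # first set of state equations
    for new_states in topology_events:           # each topology change event
        s = tuple(new_states)
        if s not in cache:                       # new topology: build and store
            cache[s] = build_state_equations(s)
        equations = cache[s]                     # repeated topology: reuse equations
        # ... simulate with `equations` until the next topology change ...
    return cache
```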
  • Another form of the present invention is a method including creating one or more data structures that together store characteristics of some active branches B_active that make up a graph of nodes and branches that form a circuit, wherein B_active consists of (i) a (possibly empty) set B_L of inductive branches, each having a non-zero inductive component but neither a capacitive component nor a variable switch state; (ii) a (possibly empty) set B_C of capacitive branches, each having a non-zero capacitive component but neither an inductive component nor a variable switch state; and (iii) a (possibly empty) set B_A of additional branches, each having neither an inductive component nor a capacitive component.
  • B active is partitioned into a first branch set B tree active and a second branch set B link active , where the branches in B tree active form a spanning tree over B active , giving priority in said partitioning to branches not in B L over branches in B L .
  • A fifth branch set B_CA is identified as the union of (i) B_link^CA, (ii) B_C ∩ B_tree^active, and (iii) those branches in B_tree^active that form a closed graph when combined with B_link^CA.
  • B_CA is partitioned into a sixth branch set B̃_tree^CA and a seventh branch set B̃_link^CA, where the branches in B̃_tree^CA form a spanning tree over B_CA, giving priority in said partitioning to branches in B_C over branches not in B_C.
  • An eighth branch set B̃_tree^C = B̃_tree^CA ∩ B_C is identified.
  • A set of state variables is selected, comprising (a) for each branch of B_link^L, either the inductor current or inductor flux, and (b) for each branch of B̃_tree^C, either the capacitor voltage or capacitor charge.
  • a plurality of states of the circuit are simulated using the set of state variables.
  • the partitioning steps each comprise an application of a weighted spanning tree algorithm, such as, for some positive numbers w L and w C , (a) for the partitioning of B active , a minimum spanning tree algorithm is used with weight function
  • w_L(b_j) = { w_L if branch b_j ∈ B_L; 0 otherwise };
  • and (b) for the partitioning of B_CA, a maximum spanning tree algorithm is used with weight function w_C(b_j) = { w_C if branch b_j ∈ B_C; 0 otherwise }.
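  • As an illustration only (not the patent's code), the weighted partitioning above can be realized with a small Kruskal-style routine; the function and variable names below are hypothetical. Inductive branches receive weight w_L so that a minimum spanning tree prefers non-inductive branches as tree branches, leaving inductive branches as links whose currents can serve as states.

```python
# Hypothetical Kruskal-style partitioning: non-inductive branches (weight 0) are
# taken into the spanning tree first, so inductive branches tend to become links.
def partition_branches(branches, num_nodes, w_L=1.0):
    """branches: list of (node_a, node_b, is_inductive) tuples."""
    parent = list(range(num_nodes))              # union-find over the node set

    def find(n):
        while parent[n] != n:
            parent[n] = parent[parent[n]]        # path halving
            n = parent[n]
        return n

    order = sorted(range(len(branches)),
                   key=lambda j: w_L if branches[j][2] else 0.0)
    tree, links = [], []
    for j in order:
        a, b, _ = branches[j]
        ra, rb = find(a), find(b)
        if ra != rb:                             # does not close a loop: tree branch
            parent[ra] = rb
            tree.append(j)
        else:                                    # would close a loop: link branch
            links.append(j)
    return tree, links                           # indices of B_tree and B_link branches
```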
  • Another form of the invention is a system, including a processor and a computer-readable medium in communication with the processor, where the medium contains programming instructions executable by the processor to:
  • the programming instructions include a state equation building module, a solver module for ordinary differential equations, and a switching logic module.
  • the building is performed by the state equation building module, the solving and calculating are performed by the solver module; and the determining is performed by the switching logic module.
  • the obtaining is performed by said switching logic module, while in others the obtaining is performed by said state equation building module.
  • At some time t j at least two switching elements are each either rising-sensitive or falling-sensitive switches. Rising-sensitive switches change switching state if and only if a controlling element of the state vector has passed from a negative value to a non-negative value. Conversely, falling-sensitive switches change switching state if and only if a controlling element of the state vector has passed from a positive value to a non-positive value.
  • the function is the arithmetic maximum of (1) a maximum of all elements of the state vector that control rising-sensitive switches, and (2) the negative of the minimum of all controlling elements of the state vector that control falling-sensitive switches.
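  • A minimal sketch of such an event function, assuming the state vector is a numeric sequence and the indices of the controlling elements are known; the names are illustrative.

```python
# Hypothetical event function: its value crosses zero from below exactly when some
# rising-sensitive or falling-sensitive switch must change state.
def switching_event(state, rising_idx, falling_idx):
    candidates = []
    if rising_idx:
        candidates.append(max(state[i] for i in rising_idx))    # rising-sensitive
    if falling_idx:
        candidates.append(-min(state[i] for i in falling_idx))  # falling-sensitive
    return max(candidates) if candidates else float("-inf")
```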
  • a still further form of the invention is a system for simulating electronic circuits having a processor and a computer-readable medium in communication with said processor, where the medium contains programming instructions executable by the processor to read element parameters and node connection information from a data stream.
  • the stream includes at least one switch type specification selected from the group consisting of: a unidirectional, unlatched switch; a bidirectional, unlatched switch; a unidirectional, latched switch; and a bidirectional, latched switch.
  • the instructions are further executable by the processor automatically to calculate state equations for the circuit given the states of switches specified by the at least one switch type specification.
  • FIG. 1 is a diagram of a generic branch for modeling a circuit using a prior art system.
  • FIG. 2 is a diagram of two branches used to model circuits in another prior art system.
  • FIG. 3 is a block diagram of the overall process for simulating a circuit according to the present invention.
  • FIGS. 4A-4C are branches used to model circuit components using the present invention.
  • FIG. 5 is a block diagram of the run-time computational routines in one embodiment of the invention.
  • FIG. 6 is a block diagram detailing the inductive link current calculator for use with the routines shown in FIG. 5 .
  • FIG. 7 is a block diagram detailing the capacitive tree voltage calculator for use with the routines shown in FIG. 5 .
  • FIG. 8 is a block diagram detailing the resistive network algebraic equation calculator for use with the routines shown in FIG. 5 .
  • FIG. 9 is a block diagram detailing the inductive network state/output equation calculator for use with the routines shown in FIG. 5 .
  • FIG. 10 is a block diagram detailing the capacitive network state/output equation calculator for use with the routines shown in FIG. 5 .
  • FIG. 11 is a schematic diagram of a switch element.
  • FIG. 12 is a block diagram of the interaction between subnetworks as analyzed for use with the present invention.
  • the system illustrated in FIGS. 3-10 and described herein simulates operation of a circuit in the time domain by collecting component parameters (variable and/or constant) and the overall circuit topology, then establishing a minimal state space for each active topology (as they are encountered), building state equations for that state space, and solving those equations for relevant steps in time.
  • the above process can be very much streamlined using the additional knowledge and techniques provided in the present invention.
  • a description 21 of the circuit including an identification of constant and variable parameters, initial conditions, and the like are fed into state model generator 31 , which provides state model 23 to solver 33 .
  • Solver 33 generates the simulation output 25 for data consumers (such as log files, graphical visualization tools, and the like) as are known in the art.
  • Solver 33 also provides continuous state information 27 to switching logic 35, which determines whether the state of one or more switches in the circuit should be changed. The result of this analysis, switching state 29, is provided to state model generator 31. If topological changes to the circuit are indicated in switching state 29, state model generator 31 updates the state model and passes that updated model to solver 33 for continued simulation.
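  • The loop formed by state model generator 31, solver 33, and switching logic 35 might be sketched as follows; generate_state_model, integrate_step, evaluate_switching_logic, and the model attributes are hypothetical placeholders for those blocks, not the actual implementation.

```python
# Hypothetical top-level loop mirroring FIG. 3; every callable and attribute used
# here is a placeholder, not part of the patent.
def run_simulation(description, t_end, dt,
                   generate_state_model, integrate_step, evaluate_switching_logic):
    model = generate_state_model(description, description.initial_switch_state)
    x, t, outputs = model.initial_state, 0.0, []
    while t < t_end:
        x, y = integrate_step(model, x, dt)                    # solver 33 advances one step
        outputs.append((t, y))                                 # simulation output 25
        new_s = evaluate_switching_logic(model, x)             # switching logic 35
        if new_s != model.switch_state:                        # topology change indicated
            model = generate_state_model(description, new_s)   # state model generator 31
            x = model.map_initial_conditions(x)                # initial conditions for new topology
        t += dt
    return outputs
```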
  • electrical networks can be composed of inductive, capacitive, and switch branch models depicted in FIGS. 4( a ), ( b ), and ( c ), respectively.
  • a wide variety of electrical circuits with different topologies can be modeled by appropriately setting the branch parameters.
  • Such a modeling approach also assumes that only a finite number of branches is allowed for representation of any particular circuit, and that all branches have lumped parameters.
  • Electrical networks satisfying this assumption can be modeled using a finite-dimensional state variable approach. Thereafter, it is possible to derive a finite-dimensional system of ordinary differential equations (ODEs) and algebraic equations (AEs) that would portray the dynamic behavior of currents and voltages for any such circuit.
  • A class of electrical networks that permits only proper commutation and possesses a finite-dimensional system of differential algebraic equations (DAEs) can be defined as a class of finite networks.
  • Such a class of finite electrical networks is denoted as N_q^{a+b}, where a and b are the numbers of state variables in the systems of ODEs for the inductive and capacitive networks, respectively, and q is the number of branches used to represent the layout of the network.
  • The numbers a and b are often called the network complexity. Thus, hereafter only proper and finite electrical networks from the class N_q^{a+b} are considered.
  • a network N can be defined by its topological configuration, which is best described by the associated graph denoted as G, and a set containing branch parameters denoted as P.
  • a particular switch branch may be identified as active or inactive depending on its state. Therefore, in order to specify N completely, a topological vector s, which would contain information regarding whether each branch is active or inactive, should be added.
  • a topological state vector s would have ones in places corresponding to all currently active branches, and zeros for the remaining branches which were identified as inactive for the current topology.
  • A network of a general kind is an object from the class N_q^{a+b} and is defined as a triple
  • N = (G, P, s)  (2.1)
  • any single branch of the overall network N must be included in some loop with other branches so as to ensure the existence of a closed path for the current flow.
  • Graphs with such a circuit property are referred to as being closed.
  • a reference direction (from ⁇ to +) is assigned to each branch with respect to the two nodes at its endpoints.
  • a network N in the most general case, may be composed of several galvanically independent circuits. Therefore, a directed multi-graph denoted by G d g can be associated with network N.
  • Such a multi-graph may consist of g closed subgraphs. That is
  • G_d^g = {G_d^1, G_d^2, ..., G_d^k, ..., G_d^g}  (2.2)
  • each k-th closed subgraph G d k corresponds to its circuit.
  • The associated undirected graph or subgraph, which can be denoted as G^g and G^k respectively, is obtained by omitting the information regarding the reference direction of each branch. In a closed undirected graph, any branch must be a member of some cycle with more than one branch.
  • The directed multi-graph G_d^g consists of a total of q branches from the branch set B = {b_1, b_2, ..., b_j, ..., b_q}  (2.3) and of p nodes from the node set
  • N = {n_1, n_2, ..., n_i, ..., n_p}  (2.4)
  • Branches and nodes appear in (2.3) and (2.4) in a definite order which is given by their respective subscript indices. Using such a representation, it is possible to retrieve a particular branch or a node by referring to the respective ordered set, B or N, with an appropriate index-pointer.
  • the network N has a changing topology.
  • some branches may be inactive and some nodes may be unused.
  • other branches may switch on and off including different nodes, thus defining a new topology.
  • the number of currently active branches may be denoted as q′ and the number of currently used nodes as p′; whereupon the number of inactive branches and nodes may be expressed as q-q′, and p-p′, respectively.
  • The set of forest branches B_y is a subset of the global branch set, that is, B_y ⊆ B.
  • There are several ways to represent a graph on a computer. Some methods may take advantage of the sparsity of the interconnection matrix and are, therefore, more efficient than others in terms of the memory required to store a particular graph.
  • a less efficient but algebraically convenient method is to represent a graph in matrix form.
  • a node incidence matrix A f for the multi-graph has p rows and q columns (one row for each node and one column for each branch, all ordered). Even though this matrix never has full rank, it can be referred to as a full node incidence matrix meaning that it is not reduced to the size of its rank.
  • This matrix is conveniently formed from positive and negative node incidence matrices as
  • this matrix can be updated for each topology such that for each currently inactive branch b j the corresponding j-th column of A f is replaced with zeros.
  • the transpose of A f is also known as the adjacency matrix of a graph. For large graphs, these matrices are quite sparse and, therefore, may be stored using techniques optimized for sparse matrices.
  • each connected subgraph of the network N contributes exactly one zero-row to A RREF .
  • the rank of A f is always the number of active nodes less the number of connected subgraphs, which can be written as
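  • As a small numerical illustration (names hypothetical), A_f can be assembled column by column and the stated rank property checked directly:

```python
# Illustrative assembly of the full node incidence matrix A_f (rows: nodes,
# columns: branches) and a check of the rank relation just stated.
import numpy as np

def full_incidence(branches, num_nodes):
    """branches: list of (from_node, to_node) pairs."""
    A_f = np.zeros((num_nodes, len(branches)))
    for j, (n_from, n_to) in enumerate(branches):
        A_f[n_from, j] = -1.0                    # branch leaves this node
        A_f[n_to, j] = 1.0                       # branch enters this node
    return A_f

# Two branches forming one loop between nodes 0 and 1: one connected subgraph.
A_f = full_incidence([(0, 1), (1, 0)], num_nodes=2)
print(np.linalg.matrix_rank(A_f))                # 1 = 2 active nodes - 1 subgraph
```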
  • the parameters of the circuit can be conveniently arranged into parameter matrices denoted as R br , G br , L br , and C br .
  • these square matrices would have dimensions equal to the total number of branches in the global network to be modeled, and would contain the parameters of each branch in the corresponding diagonal entry of each matrix. If some branches are coupled through mutual inductances or mutual capacitances, then those mutual parameters are represented in the off-diagonal entries of L br or C br corresponding to these branches. In the same way, mutual resistances or mutual conductances can be modeled between branches. Even though such quantities might not have physical meaning similar to mutual inductances or mutual capacitances, they can be employed for simulation purposes.
  • the corresponding entries in the parameter matrices are filled with zeros.
  • the external independent voltage and current sources are also represented as vectors denoted as e br and j br . These vectors would have voltages and currents in those entries corresponding to the branches with respective sources and zeros elsewhere.
  • In some circuits, the network parameters will not depend on the currents and voltages applied to the branches. In other circuits the parameters will vary with time, so the parameter matrices would also become matrix functions of time R_br(t), G_br(t), L_br(t), and C_br(t). For simplicity of notation herein, such time dependence will not be written explicitly. Also, since inductors and capacitors are energy storage components, in addition to their time dependence, their total derivatives with respect to time must also be known. Thus, the derivatives of time-varying inductances and capacitances are additional inputs into the model of the energy transformation and exchange processes in the circuit.
  • the network parameter matrices and the vectors of voltage and current sources can be grouped to form a parameter set for the global network, which can be defined as
  • P_L = {R_br, L_br, ∂L_br/∂t, e_br}  (2.11)
  • P_C = {G_br, C_br, ∂C_br/∂t, j_br}  (2.12)
  • Subdivision of the parameter set P corresponding to the global network N into subsets (2.11), (2.12), and (2.13) has the following goal.
  • A network of this kind does not have energy storage elements such as inductors or capacitors. Modeling such a network would require solving a system of AEs relating branch voltages and currents to external sources through the network parameters and topology. From the nature of the equations that need to be solved in order to obtain all branch currents and voltages, such networks are referred to as being algebraic and denoted as N_A.
  • networks whose parameters are placed in the set P L have inductive elements which may store energy in the magnetic field.
  • a network of this type possesses a system of ODEs, or more precisely a state equation, whose natural state variable may be inductor currents or flux linkages.
  • Networks whose parameters are placed in P_C have capacitors that may store energy in an electric field, and therefore possess ODEs whose natural state variables may be capacitor voltages or charges. These two networks are referred to as inductive and capacitive, respectively.
  • This yields three types of elementary electrical networks, namely inductive, capacitive, and algebraic, denoted as N_L, N_C, and N_A, respectively.
  • N = N_L ∪ N_C ∪ N_A  (2.15)
  • some networks may or may not be present in (2.15).
  • A graph whose full node incidence matrix is A_f has a total of det(A_a A_a^T) spanning trees, where A_a is the corresponding reduced node incidence matrix.
  • One way to simplify this task is to associate appropriate weights with the branches and convert the problem into finding a spanning tree with the minimized/maximized sum of such weights. This approach is known in network optimization as the minimum/maximum spanning tree problem.
  • the branch weights should be relatively simple so as to promote good performance as well as to be able to prove certain useful properties.
  • the weights can be assigned to network branches based on their respective parameters (2.10). This method of obtaining a spanning tree and a set of links, with some desired property, will be utilized in the present, exemplary embodiment for the purpose of automated modeling.
  • any proper forest of spanning trees would suffice. Having performed such graph partitioning, the set of tree-branches B y is identified. Thereafter, the set of remaining link-branches B x can be determined as
  • the new branch order defined in (2.17) can be applied to the columns of A f .
  • the new branch order can be related to the original branch order through a permutation matrix which is denoted as T p .
  • T p permutation matrix
  • Multiplying A f from the right by T p results in a matrix whose columns are ordered to correspond to the branch set B in (2.17). That is, the permutation matrix T p should sort the columns of A f such that
  • This permutation matrix can be assembled from an identity matrix by sorting its columns at the same time as the branches in (2.17). Note that the multiplication from the right by T p T performs the reverse column permutation and restores the original column order. That is
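  • A short illustration of these column-permutation relations; the permutation_matrix helper and the example ordering are hypothetical.

```python
# Illustrative permutation: T_p is an identity matrix with its columns sorted in
# the same way as the branches; multiplying by T_p.T undoes the reordering.
import numpy as np

def permutation_matrix(new_order):
    """new_order[k] = original index of the branch placed in position k."""
    q = len(new_order)
    T_p = np.zeros((q, q))
    for k, j in enumerate(new_order):
        T_p[j, k] = 1.0
    return T_p

A_f = np.array([[-1.0,  0.0,  1.0],
                [ 1.0, -1.0,  0.0],
                [ 0.0,  1.0, -1.0]])
T_p = permutation_matrix([2, 0, 1])              # branch 2 first, then 0, then 1
A_hat = A_f @ T_p                                # columns in the new branch order
assert np.allclose(A_hat @ T_p.T, A_f)           # T_p.T restores the original order
```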
  • The hat sign above a matrix denotes that the corresponding matrix or vector quantity is referred to a branch order different from the original order.
  • The topological canonical form (TCF) is one of the key concepts used herein to describe networks. From A_TCF, the reduced incidence matrix Â_a and the so-called basic loop matrix B̂_b are obtained as
  • KCL and KVL written for the whole network have the usual form
  • î_br = [i_y, i_x]  (2.30)
  • î_br = (B̂_b)^T i_x  (2.32)
  • the vector of re-sorted branch voltages can be expressed in terms of the tree voltages as
  • A class of finite electrical networks N_q^{a+b} is considered.
  • the two types of networks that can be modeled using a state variable approach are inductive and capacitive. Equipped with techniques based on a topological search for an appropriate forest of spanning trees, the conditions under which a corresponding state equation can be assembled are also considered.
  • An inductive network can be built using branches of the type depicted in FIG. 4( a ). Assuming that all switches are active, a network of this type is given as
  • N_L = (G, P_L)  (2.80)
  • For a given topology of N_L, the branches must be re-sorted into subsets as in (2.17).
  • the subset B y would collect branches that form spanning trees for all subgraphs. Each such tree is free of cycles, and covers all active nodes in its subgraph.
  • the second category, denoted as subset B x takes the link-branches. These branches are the links in a sense that addition of any of them to the spanning tree would result in a cycle.
  • These branches can carry state variables—independent currents—and since their number is minimal for each spanning tree, they form a minimal set of states for the N L .
  • the following weight function w L (b) is defined as
  • MinSTA: minimum spanning tree algorithm
  • N L has no non-inductive loops if and only if there is a set B x in which the number of branches equals the sum of their weights
  • the set B x need not be unique, but any such set B x , satisfying (2.82) is equivalent in a topological sense. That is, an arbitrary set of branches (that can be larger than B x ) may be chosen to represent state variables, such as independent currents, in N L if and only if it contains a set B x satisfying topological condition (2.82). If however, (2.82) cannot be met, it follows that the given network is more than just a single inductive network. An algorithm for handling more than one network will be presented in later sections.
  • the corresponding state equation is obtained using the dimensionality reduction procedure discussed in Wasynczuk and Sudhoff.
  • the procedure may be as follows.
  • the voltage equation written for the network is multiplied from the left by the corresponding matrix of KVL and all vectors of branch currents are replaced with (2.83). The result is
  • ∂i_x/∂t = -L_x^{-1} (R_x + ∂L_x/∂t) i_x - L_x^{-1} B_b e_br  (2.84)
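  • A hedged sketch of evaluating (2.84), assuming the reduced matrices L_x, R_x, ∂L_x/∂t and the loop matrix B_b have already been assembled; the function name is illustrative.

```python
# Hedged sketch of evaluating (2.84); the reduced matrices are assumed given.
import numpy as np

def inductive_state_derivative(i_x, L_x, R_x, dLx_dt, B_b, e_br):
    """di_x/dt = -L_x^{-1} (R_x + dL_x/dt) i_x - L_x^{-1} B_b e_br."""
    rhs = -(R_x + dLx_dt) @ i_x - B_b @ e_br
    return np.linalg.solve(L_x, rhs)             # avoids forming L_x^{-1} explicitly
```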
  • a capacitive network with augmented topology can be defined using the corresponding parameter set and an associated graph as
  • N_C = (G, P_C)  (2.85)
  • a slightly different topological approach might be used.
  • a MaxSTA with a different weight function can be applied in order to find maximum spanning trees. This time, the weight function w C (b j ) is defined such that
  • the network N C should not have non-capacitive tree-branches. Such a condition is satisfied if and only if there is a branch set B y for which the following is true
  • Condition (2.87) is also necessary and sufficient, and therefore, the discussion thereof applies to this case as well.
  • the set B y need not be unique, but all such sets for which (2.87) holds are equivalent in a topological sense. If the condition (2.87) cannot be met, it can be shown that the corresponding network is not just N C but a union of the form N C ⁇ N A . However, for the purpose of this section, a single capacitive network is considered.
  • The natural state variables for N_C are the capacitor voltages. Therefore, a vector of state variables v_y can be chosen to be a vector of independent capacitor voltages such that
  • ∂v_y/∂t = -C_y^{-1} (G_y + ∂C_y/∂t) v_y + C_y^{-1} A_a j_br  (2.89)
  • G_y = A_a G_br A_a^T  (2.90)
  • C_y = A_a C_br A_a^T  (2.91)
  • ∂C_y/∂t = A_a (∂C_br/∂t) A_a^T  (2.92)
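  • The assembly of (2.90)-(2.92) and the evaluation of (2.89) can be sketched in the same spirit as above; again the names are illustrative.

```python
# Hedged sketch of (2.89)-(2.92): build the reduced capacitive matrices from the
# reduced incidence matrix A_a, then evaluate dv_y/dt.
import numpy as np

def capacitive_state_derivative(v_y, A_a, G_br, C_br, dCbr_dt, j_br):
    G_y = A_a @ G_br @ A_a.T                     # (2.90)
    C_y = A_a @ C_br @ A_a.T                     # (2.91)
    dCy_dt = A_a @ dCbr_dt @ A_a.T               # (2.92)
    rhs = -(G_y + dCy_dt) @ v_y + A_a @ j_br
    return np.linalg.solve(C_y, rhs)             # dv_y/dt from (2.89)
```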
  • Since K_C in (2.100) can be any non-singular matrix, it is always possible to choose it in a way that the new state variables have particular physical significance. Similar to N_L, where the states can be transformed from currents i_x to fluxes λ_x, in the case of N_C the states may be transformed from voltages v_y to charges q_y by an appropriate choice of transformation. In particular, by defining the transformation as
  • In general, a network N cannot always be structurally represented entirely as N_L or as N_C.
  • an attempt to represent the network N structurally entirely as N L fails if there is a shortage of inductive link branches in the set B x .
  • the network N cannot be viewed only as N C if the corresponding forest of spanning trees lacks capacitive branches in the set B y .
  • the global network cannot be viewed entirely as a single type of a network, and therefore, a more general approach is required.
  • A method of handling networks with arbitrary topologies is presented here based on the separation of N into interconnected networks. That is, if it is not possible to represent an entire circuit as a network of a single kind, it is necessary to identify some cuts of N that can be grouped together to form several networks based on their topological properties. Separation of N into N_L, N_C, and N_A in a way such that it is possible to obtain consistent state equations for N_L, N_C, and a system of algebraic equations for N_A is, therefore, a generalization of the ASMG approach for circuits with arbitrary topologies. In this sense, a robust state selection algorithm is a key to such a generalization.
  • the technique of network identification and partitioning introduced here is based on the TCF of the node incidence matrix assembled for the global network.
  • the TCF makes topological information about N available in an “algebraic” sense for the further use in KCLs and KVLs.
  • the topological quantities referred to a particular branch order, such as in (2.17) will be distinguished by the hat sign, keeping in mind that it is always possible to transform them back to the original order using a corresponding permutation matrix.
  • the superscripts “L”, “C”, and “A”, and the combinations thereof would be employed to relate variables and quantities to the inductive, capacitive, algebraic, and the overlapping networks, respectively.
  • It will be shown how an algebraic network can be identified and how the corresponding system of algebraic equations can be assembled. It will also be shown that even in the presence of N_A, a minimal and consistent state equation for N_L can still be obtained following the same dimensionality reduction procedure set forth in Wasynczuk and Sudhoff. Then, similar derivations will be repeated for N_C, in a somewhat simplified form, using its structural duality with respect to the inductive case. Finally, a way of obtaining a consistent system of DAEs relating all networks will be presented using the established framework.
  • Suppose a network N = (G, P_L) is constructed using branches shown in FIGS. 4A and 4C.
  • Suppose further that the MinSTA with weight function w_L(b) is applied, and that in the end (2.82) does not hold. This implies that there is no way the set of links B_x can be chosen to contain only inductive branches, and
  • A_f T_L = [A_trees^L, A_trees^A, A_links^A, A_links^L]  (3.7)
  • the superscript “L” denotes that the corresponding branches can be safely placed into inductive network N L
  • the “A” identifies all other non-inductive branches, as viewed from N L , that belong to N A .
  • \( A_{TCF} = \begin{bmatrix} I & 0 & 0 & \hat{A}_h^L \\ 0 & I & \hat{A}_m^A & \hat{C}_h^A \end{bmatrix} \)  (3.8)
  • h is the number of inductive link-branches in the set B x , as defined in (3.5);
  • m is the number of non-inductive link-branches in B x , as defined in (3.6);
  • is the number of tree-branches in the set B y that are linked by the h inductive link-branches in B x .
  • is the number of tree-branches in B y that are linked by the m non-inductive link-branches from B x .
  • the reduced node incidence matrix and the basic loop matrix for the global network N are found as usual
  • Â_a = A_TCF  (3.9)
  • \( \hat{B}_b = \begin{bmatrix} 0 & -(\hat{A}_m^A)^T & I_{m \times m} & 0_{m \times h} \\ -(\hat{A}_h^L)^T & -(\hat{C}_h^A)^T & 0_{h \times m} & I_{h \times h} \end{bmatrix} \)  (3.10)
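  • Because the TCF has the form [ I | H ], the basic loop matrix follows mechanically as [ -H^T | I ]; the sketch below illustrates this relation, and the KCL orthogonality A_TCF B_b^T = 0, on a small hypothetical example.

```python
# Illustrative derivation of the basic loop matrix from a TCF of the form [ I | H ];
# the example matrix is hypothetical.
import numpy as np

def loop_matrix_from_tcf(A_TCF):
    n_tree = A_TCF.shape[0]                      # rank = number of tree branches
    H = A_TCF[:, n_tree:]                        # columns of the link branches
    return np.hstack([-H.T, np.eye(H.shape[1])])

A_TCF = np.array([[1.0, 0.0,  1.0, -1.0],
                  [0.0, 1.0, -1.0,  0.0]])
B_b = loop_matrix_from_tcf(A_TCF)
assert np.allclose(A_TCF @ B_b.T, 0.0)           # loops satisfy KCL: A_TCF B_b^T = 0
```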
  • KVL for N A can be derived as
  • KCL (3.11) is a self-contained equation for N L .
  • KVL (3.12) contains only quantities relevant to N A . In this sense, these two equations are de-coupled.
  • ∂i_x/∂t = -L_x^{-1} (R_x + ∂L_x/∂t) i_x - L_x^{-1} B_b^L e_br^L - L_x^{-1} C_b^{LA} v_br^A  (3.18)
  • R̂_br = T_L^T R_br T_L
  • The branch voltages for N_A should be known. These voltages are functions of internal topology, parameters, and external sources. In general, it is necessary to compute both currents and voltages for all branches in N_A.
  • To do so, KVL (3.12) and KCL (3.17) are utilized. In particular, suppose the currents of N_A are found first, and then the branch voltages. However, in (3.17) there are fewer equations than there are branches in N_A: (3.17) provides one equation per tree-branch of N_A, while the unknowns also include the m link-branch currents. That is, there should be m more equations, precisely one for each cycle in N_A.
  • N A has its own voltage equation, which may be expressed in terms of the branch order given by the columns of (3.7) as
  • the total number of non-capacitive tree-branches in the forest of maximum spanning trees can be determined as
  • The TCF can be obtained by taking the RREF of (3.28) and removing zero rows from the bottom. This TCF has the following structure
  • \( A_{TCF} = \begin{bmatrix} I & 0 & \hat{D}_t^{CA} & \hat{A}_z^C \\ 0 & I & \hat{A}_t^A & 0 \end{bmatrix} \)  (3.29)
  • t is the number of non-capacitive link-branches in the set B x .
  • is the number of capacitive tree-branches in the set B y as defined in (3.26).
  • is the number of non-capacitive tree-branches in the set B y as defined in (3.27).
  • Â_a = A_TCF  (3.30)
  • \( \hat{B}_b = \begin{bmatrix} -(\hat{D}_t^{CA})^T & -(\hat{A}_t^A)^T & I_{t \times t} & 0 \\ -(\hat{A}_z^C)^T & 0 & 0 & I_{z \times z} \end{bmatrix} \)  (3.31)
  • KCLs and KVLs can be written for the two networks. That is, writing KCL using (3.30),
  • î br A ⁇ br A ⁇ circumflex over (v) ⁇ br A ⁇ br A (3.40)
  • ∂v_y/∂t = -C_y^{-1} (G_y + ∂C_y/∂t) v_y + C_y^{-1} A_a j_br - C_y^{-1} D_a^{CA} i_br^A  (3.42)
  • Both types of networks, namely N_L and N_C, have been considered with a shortage of branches that can carry state variables.
  • the corresponding minimal state equations (3.18) and (3.42) were completed by adding extra source terms that came about due to the algebraic part of the corresponding network.
  • In the first approach, N_L and N_A are considered as was done in the beginning of this chapter, whereupon N_C is incorporated into the existing structure. In doing so, capacitor voltages of N_C can be mapped into e_br^L and v_br^A in (3.18).
  • a second approach would consist of adding N L to the structure developed for the N C and N A , and mapping inductor currents into j br C and i br A in (3.42). Since both methods yield equivalent results, choosing either one is a matter of pure convenience. Following the order in which the material was presented earlier, preference is given to the first approach.
  • the capacitive branches corresponding to any of the columns in A trees L represent the part of N C that overlaps with N L . As a result of such overlapping, each of such capacitive branches is going to have its own independent state variable within N C .
  • the branch voltages corresponding to such capacitive branches can be viewed as independent voltage sources e br L present in (3.18).
  • the remaining capacitive branches are viewed as a part of an algebraic network for N L , and therefore, are going to be represented in the columns of blocks A trees A and A links A as given in (3.7).
  • the challenge is to reorder columns (branches) in (3.7) taking into consideration the capacitive network.
  • the procedure of reordering columns is very similar to the two previous cases.
  • all columns corresponding to the non-capacitive branches in A trees L are identified and placed on the left side of this block.
  • the trees of capacitive branches that form a capacitive network need to be separated.
  • the MaxSTA with weight function w C (b) is applied to the branches in A trees A and A links A .
  • the columns of this block are again sorted similar to (3.28).
  • the final branch ordering with corresponding permutation matrix may be expressed as
  • A_f T_LCA = [A_trees^L, A_trees^LC, A_trees^C, A_trees^A, A_links^A, A_links^CA, A_links^L]  (3.47)
  • the permutation matrix T LCA in (3.47) sorts branches of the global network N in groups with very specific topological properties corresponding to different networks. Again, taking the RREF of (3.47) and removing the zero rows, a TCF with the following structure is produced
  • \( A_{TCF} = \begin{bmatrix} I & 0 & 0 & 0 & 0 & 0 & \hat{A}_h^L \\ 0 & I & 0 & 0 & 0 & 0 & \hat{A}_h^{LC} \\ 0 & 0 & I & 0 & \hat{D}_t^{CA} & \hat{A}_z^C & \hat{C}_h^{LC} \\ 0 & 0 & 0 & I & \hat{A}_t^A & 0 & \hat{C}_h^{LA} \end{bmatrix} \)  (3.48)
  • is the number of non-capacitive tree-branches in the set B y that are also placed in N L .
  • is the number of capacitive tree-branches in B y that are placed in N L and N C .
  • is the number of capacitive tree-branches in B y that are placed in N C .
  • is the number of non-capacitive tree-branches in B y that are placed in N A .
  • t is the number of non-capacitive link branches in B x , that are placed in N A .
  • z is the number of capacitive link-branches in B x , that are placed in N C .
  • h is the number of inductive link-branches in the set B x that are placed in N L .
  • Â_a = A_TCF  (3.49)
  • \( \hat{B}_b = \begin{bmatrix} 0 & 0 & -(\hat{D}_t^{CA})^T & -(\hat{A}_t^A)^T & I_{t \times t} & 0 & 0 \\ 0 & 0 & -(\hat{A}_z^C)^T & 0 & 0 & I_{z \times z} & 0 \\ -(\hat{A}_h^L)^T & -(\hat{A}_h^{LC})^T & -(\hat{C}_h^{LC})^T & -(\hat{C}_h^{LA})^T & 0 & 0 & I_{h \times h} \end{bmatrix} \)  (3.50)
  • KVL can be written in the following way. First, for the capacitive network
  • the vector of state variables for N L is selected as a vector of independent currents such that
  • the vector of states for N C is chosen to be a vector of independent capacitor voltages such that
  • KCL (3.67) written for N C and KVL (3.70) written for N L are coupled through the corresponding interconnection matrices that are related to each other as
  • N A contains non-empty resistive branches that are modeled as depicted in FIGS. 4( a ) and ( b ), with voltage or current sources, respectively.
  • each branch in B is a member of some cycle composed of more than one branch (self-loops are not allowed)
  • The entire branch set B can be partitioned into two subsets B_y and B_x such that they have no branches in common, and set B_y has a minimal size and spans the entire node set N.
  • G_trees = (N, B_y) is a spanning tree. If G is a multi-graph, then G_trees is a spanning forest (forest of spanning trees). Also, based on the parameter set P, the weight functions w_L (2.81) and w_C (2.86) must be defined over the entire branch set B. Then, depending on the order in which the elementary networks are to be found, a network identification procedure can be formulated in four major steps. Two procedures will now be discussed for the different orders of network identification.
  • Step 1 Call the Minimum Spanning Tree Algorithm
  • the branch set B y can be partitioned into the following sets
  • this branch set may be partitioned further as
  • The branch set B̃_CA (3.83) forms a closed graph,
  • while the remaining branches {B̃_y^LC, B_x^L} do not, unless B̃_CA and {B̃_y^LC, B_x^L} are sets of galvanically disjoint branches.
  • Step 3 The Maximum Spanning Tree Algorithm is Applied
  • the set of link-branches is retrieved as
  • The branch set B_x^C may or may not actually have capacitive branches, but all of its branches are links to capacitive trees in B_y^{C2}.
  • the remaining link-branches can be found as
  • the branch order established in (3.92) corresponds to the TCF (3.48) with the block matrices having dimensions of their respective sets in (3.93)-(3.95).
  • Step 1 The Maximum Spanning Tree Algorithm is Applied
  • Step 2 All link-branches in B_x corresponding to trees in B̃_y^{LA} are found. This set of link-branches is denoted as B̃_x^{LA}.
  • the combined set
  • There may be inductive link-branches among the branches of B_x^{LC}.
  • These inductive link-branches can be identified and placed into a set denoted B_x^{L1}. After that, B_x^{LC} can be re-sorted as
  • B x C includes the remaining non-inductive link-branches that may or may not be capacitive.
  • Step 3 The Minimum Spanning Tree Algorithm is Called
  • Step 4 B_x^{L2} is removed from the branch set B̃^{LA}.
  • B y LA can be partitioned as
  • the elementary networks are formed based on the branch sets as follows
  • the TCF of the node incidence matrix has the following structure
  • \( A_{TCF} = \begin{bmatrix} I & 0 & 0 & \hat{D}_t^{CA} & \hat{D}_h^{L2} & \hat{A}_k^{L1} & \hat{A}_z^C \\ 0 & I & 0 & 0 & \hat{A}_h^L & 0 & 0 \\ 0 & 0 & I & \hat{A}_t^A & \hat{C}_h^{LA} & 0 & 0 \end{bmatrix} \)  (3.112)
  • the previous procedures are the techniques for sequential identification of elementary networks. Depending on the relative order in which the networks are identified from the global graph, the sequence of steps in the procedure may differ. Similar procedures may be constructed in which the order of elementary network identification is different from the two cases considered above. However, as it is expected, the results of such procedures are topologically equivalent.
  • N = N_L ∪ N_A  (3.113)
  • N = N_C ∪ N_A  (3.114)
  • When the branch types of FIGS. 4( a ) and ( b ) were considered at the same time, it was shown that the circuit can be consistently described by viewing the corresponding global network as being partitioned as
  • N = (N_L ∪ N_LC ∪ N_C) ∪ N_A  (3.115)
  • N = N_L ∪ N_C ∪ N_A  (3.116)
  • the network should be partitioned in such a way that the number of variables coupling the corresponding systems of DAEs is minimized. If such a goal is feasible and the number of state variables in each of the DEs is significantly larger than the number of coupling variables, the corresponding networks may be viewed as being weakly coupled. In terms of the topology of weakly coupled networks, it is reasonable to expect that the number of common or connecting branches is small.
  • N = N_1 ∪ N_2  (3.120)
  • the columns in the node incidence matrix would also be reorganized such that the corresponding TCF would have a particular block structure needed to assemble the KCL and KVL matrices on a network-by-network basis.
  • Two types of structures of the right-hand side of the TCF have been heretofore encountered: lower-block triangular as in (3.8), and upper-block triangular as in (3.29).
  • A lower-block triangular TCF may be considered.
  • the KCL and KVL matrices would have the following form
  • \( B_b = \begin{bmatrix} 0 & -(A_2)^T & I_2 & 0 \\ -(A_1)^T & -(A_{21})^T & 0 & I_1 \end{bmatrix} \)  (3.122)
  • The corresponding TCF can be expressed in the general lower block-triangular form
  • \( A_{TCF} = \begin{bmatrix} I_1 & & & & & & & A_1 \\ & I_2 & & & & & A_2 & A_{21} \\ & & \ddots & & & & & \vdots \\ & & & I_n & A_n & \cdots & A_{n2} & A_{n1} \end{bmatrix} \)  (3.124)
  • the KCL matrix for the k-th network (that is, the self-KCL matrix) is defined as
  • the corresponding KCL coupling matrix relating the k-th and i-th networks can be written from the k-th row of the TCF as
  • the KVL matrices are defined in a similar way.
  • the KVL self-matrix for the k-th network is determined from (3.124) to be
  • the KVL matrix coupling the k-th and i-th networks is assembled as
  • the TCF (3.124) has a very general structure applicable to interconnected networks.
  • the KCLs (3.127) and KVLs (3.132) also reflect possible coupling among all networks in (3.123) through their branch currents and voltages. If many of the networks in (3.123) are mutually de-coupled or weakly coupled, it can be expected that many of the block matrices with double subscripts in (3.124) are zero. For instance, if it is possible to make the right side of the TCF (3.124) block-bi-diagonal, the corresponding networks in (3.123) would be sequentially connected. Another useful way of formulating the relations among the networks is to have one (or maybe several) specific networks that represent all interconnections. In terms of the right side of the TCF, an attempt would be made to form a block diagonal structure as far down as possible by appropriately selecting the networks.
  • For a switched network N = (G, P, S) composed of the elementary networks N_L, N_C, and N_A, the network partitioning may be of a more advanced form such as (3.123), where the branches may be assigned to networks based on some additional constraints. For instance, it is reasonable to establish additional constraints such that some of the networks of (3.123) remain unchanged throughout the entire simulation study. The same goal can also be pursued on the basis of elementary networks.
  • Suppose each elementary network N_e is partitioned into two smaller ones as
  • N_e = N_e1 ∪ N_e2  (3.143)
  • The parameters of N_e may depend on time as well as on the applied currents and voltages.
  • In that case the equations corresponding to N_e become nonlinear with time-varying coefficients. Therefore, it is desirable to partition the network as in (3.143) such that one of the networks, say N_e1, takes all branches with nonlinear and/or time-varying parameters. It may be necessary to include other branches in order to make N_e1 a proper network. After performing such network partitioning, it may become possible to assemble two systems of equations for N_e1 and N_e2, respectively, such that each of them possesses smaller dimensions.
  • Similarly, an elementary network N_e can be partitioned such that either N_e1 or N_e2 possesses a system of equations that need not be reassembled for each new topology. Thereafter, an attempt is made to exclude all such networks from the equation assembling procedures that are performed at each switching instance. Separating the global network N into its switching and non-switching parts would not only reduce the total amount of computations required per change in topology, but also provide a means for the local averaging of state equations for the switched subnetwork as is known in the art.
  • the structure of the DAEs produced by ASMG may also be utilized for the non-impedance-based stability analysis of energy conversion systems represented by their equivalent circuits. That is, instead of relying on the linearization of the state equations and using Nyquist-type criteria, it is possible to generate the DAEs in a form suitable for the Lyapunov analysis. For instance, if needed, a change of variables could be used to rewrite the state equations in the following autonomous form.
  • ∂x_L/∂t = f_L(x_L) + g_L(x_L, x_C)  (3.144)
  • ∂x_C/∂t = f_C(x_C) + g_C(x_L, x_C)  (3.145)
  • g L (x L , x C ) and g C (x L , x C ) are the interconnection terms that also include the algebraic network.
  • the stability of (3.144)-(3.145) can be analyzed as similar to a problem of perturbed motion and studied using Lyapunov functions. However, for this technique, it is necessary to establish Lyapunov functions for each system in (3.144)-(3.145) ignoring the interconnections to begin with.
  • LMIs: linear matrix inequalities
  • Such a network representation allows the corresponding systems of DAEs (3.72)-(3.75) to be assembled in a consistent manner for each elementary network with respective interconnections.
  • (3.72)-(3.75) can be written as
  • u_ej = [e_br, j_br] is an input vector that is composed of input voltage and current sources.
  • (4.1) is a first-order system of ODEs
  • (4.2) is a system of AEs.
  • the AE (4.2) has the form (3.23), (3.25), (3.39), (3.41) and (3.74)-(3.75). If the elements of N A are some known, possibly nonlinear functions of applied currents and voltages, then (4.2) may not have a unique solution. In general, (4.2) may be solved using iterative methods for solving systems of nonlinear equations.
  • the stage is set for assembling (4.4).
  • Expressions (4.6) and (4.7) already provide currents and voltages for the branches of N A .
  • It is necessary to express currents and voltages for the branches of the two remaining networks, N_L and N_C.
  • For N_L, a vector of corresponding branch currents is determined from the independent currents as
  • a vector of branch voltages for the N L can be written as
  • v_br^L = (R_br + ∂L_br/∂t) (B_b^L)^T i_x + L_br (B_b^L)^T ∂i_x/∂t + e_br^L  (4.13)
  • I L is an identity-like matrix with ones in diagonal entries corresponding to branches in N L and zeros elsewhere.
  • a vector of branch currents for the N C can be found as
  • i_br^C = (G_br + ∂C_br/∂t) (A_a^C)^T v_y + C_br (A_a^C)^T ∂v_y/∂t - j_br^C  (4.17)
  • I C is also an identity-like matrix with ones in diagonal entries corresponding to branches in N C and zeros elsewhere.
  • Assuming that vectors of currents and voltages for all branches of the global network N can be assembled (concatenated) from the corresponding currents and voltages of N_L, N_C, and N_A as
  • \( \begin{bmatrix} i_{br} \\ v_{br} \end{bmatrix} = \begin{bmatrix} (B_b^L)^T + C_C^{LA} + D_I^A & C_C^{CA} + D_V^A \\ C_C^{LA} + C_I^A & (A_a^C)^T + C_L^{CA} + C_V^A \end{bmatrix} \begin{bmatrix} i_x \\ v_y \end{bmatrix} + \begin{bmatrix} D_C^A + D_e^A & D_C^{CA} + D_J^A \\ D_L^{LA} + C_e^A & D_L^A + D_j^A \end{bmatrix} \begin{bmatrix} e_{br} \\ j_{br} \end{bmatrix} \)  (4.21)
  • network parameters may not only depend on time but also be some functions of state variables such as inductor currents and fluxes, and capacitor voltages and charges.
  • a branch resistance may depend on the current flowing through or the voltage applied to the branch.
  • If nonlinear magnetic properties are to be included in the model, the corresponding branch inductances become state dependent. This phenomenon is caused by the saturation of the magnetic materials used in inductors.
  • the effective capacitance of the junction may also depend on voltage or current depending upon the type of device considered. Therefore, it should be considered that
  • ⁇ CA represents all terms that are due to interconnection of N L with capacitive and algebraic networks.
  • λ̃_br is a vector-valued saturation function. From this point, (4.29) is substituted into (4.27) and the dimensionality reduction procedure using KVL (4.28) is applied to the resulting voltage equation. In doing so, the vector of independent fluxes is found as follows
  • λ̃_x(i_x) is the reduced saturation function
  • Equations (4.30)-(4.31) form a system of DAEs for the network N L .
  • iterative methods would be used for solving (4.30) for the vector of currents.
  • Newton's method with an initial guess taken as i x from the previous integration step could yield fast convergence.
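  • A hedged sketch of that Newton iteration, assuming a reduced flux-current relation of the form λ_x = L_x i_x − λ̃_x(i_x) consistent with the sign appearing in (4.35); the saturation function, its Jacobian, and all names below are placeholders.

```python
# Hedged Newton sketch, assuming lam_x = L_x i_x - sat(i_x) (cf. the sign in 4.35);
# sat and sat_jac are placeholder callables for the saturation term and its Jacobian.
import numpy as np

def solve_currents(lam_x, L_x, sat, sat_jac, i_x_prev, tol=1e-10, max_iter=20):
    i_x = np.array(i_x_prev, dtype=float)        # warm start from previous step
    for _ in range(max_iter):
        residual = L_x @ i_x - sat(i_x) - lam_x
        if np.linalg.norm(residual) < tol:
            break
        jacobian = L_x - sat_jac(i_x)            # d(residual)/d(i_x)
        i_x -= np.linalg.solve(jacobian, residual)
    return i_x
```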
  • i x can be chosen to be the state vector in order to avoid having to solve a system of nonlinear equations.
  • the time derivative of fluxes in (4.30) can be expressed as
  • the vector of branch currents is computed as usual using (4.11), and (4.28) can be used to compute branch voltages. Applying the chain rule to (4.30) and utilizing (4.11), the derivative of branch fluxes can be expressed as
  • ∂λ_br/∂t = (L_br - ∂λ̃_br/∂i_br) (B_b^L)^T (∂i_x/∂t) + (∂L_br/∂t) (B_b^L)^T i_x  (4.35)
  • v_br^L = (L_br - ∂λ̃_br/∂i_br) (B_b^L)^T (∂i_x/∂t) + (R_br^L + ∂L_br/∂t) (B_b^L)^T i_x + e_br^L  (4.36)
  • the nonlinear relation between capacitor charges and voltages can be represented in a form similar to magnetic saturation. That is
  • q̃_br is also a vector-valued function representing the part of the capacitance that depends on the voltage.
  • the vector of branch currents is computed based on (4.37) and (4.41) as
  • i_br^C = (C_br - ∂q̃_br/∂v_br) (A_a^C)^T (∂v_y/∂t) + (G_br^C + ∂C_br/∂t) (A_a^C)^T v_y - j_br^C  (4.44)
  • M_L(t) and M_C(t) are so-called mass matrices that can depend on time and state.
  • y_br^L = g_L(∂i_x/∂t, i_x, e_br^L, t)  (4.49)
  • y_br^C = g_C(∂v_y/∂t, v_y, j_br^C, t)  (4.50)
  • Equations (4.45)-(4.50) form a system of DAEs which describe the global network N.
  • (4.45) is a system of nonlinear equations that could be solved numerically.
  • (4.46) and (4.49)-(4.50) are explicit systems of AEs that are not expensive to evaluate.
  • the implicit systems of DEs (4.47)-(4.48) may be solved for the respective derivatives using some efficient techniques developed for systems of linear equations. In such arrangements, a linear solver would be called at each call to the corresponding derivative function.
  • there are efficient numerical techniques that have been designed to accommodate time dependent mass matrices.
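  • As an illustrative arrangement only, the implicit form M(x, t) dx/dt = f(x, t) can be wrapped so that a standard ODE solver sees an explicit derivative, with a linear solve performed at each derivative call; mass() and rhs() below are hypothetical callables, and SciPy's solve_ivp is used merely as an example solver.

    import numpy as np
    from scipy.integrate import solve_ivp

    def make_derivative(mass, rhs):
        # Wrap M(x, t) * dx/dt = f(x, t): a linear solve is performed at every
        # call so that the solver receives an explicit derivative.
        def dxdt(t, x):
            return np.linalg.solve(mass(x, t), rhs(x, t))
        return dxdt

    # Hypothetical usage with a constant mass matrix and simple decay dynamics.
    M = np.array([[2.0, 0.0], [0.0, 1.0]])
    sol = solve_ivp(make_derivative(lambda x, t: M, lambda x, t: -x),
                    (0.0, 1.0), [1.0, 0.5])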
  • N (G, P, s), where s is some given topology. If N is a switched network, the vector s is different for each new topology. In order for the constructed model to be meaningful, N should be a finite electrical network for each encountered topology. Throughout the simulation process, the network N may switch sequentially from an initial topology s 0 to the final topology s, say as
  • each distinct topology of (5.1) there is a minimal realization for the inductive and capacitive networks present in N.
  • these minimal realizations may have a different number of states for each topology. That is, for a given topology s i of the sequence (5.1), the state vectors of N L and N C belong to some appropriate vector spaces
  • \alpha = \max\{\alpha_0, \alpha_1, \ldots, \alpha_i, \ldots, \alpha_j, \ldots, \alpha_m\} \qquad (5.7)
  • \beta = \max\{\beta_0, \beta_1, \ldots, \beta_i, \ldots, \beta_j, \ldots, \beta_m\} \qquad (5.8)
  • N(G, P, S) \in \mathbf{N}_q^{\alpha+\beta} \qquad (5.9)
  • the sequence (5.1) may contain repeated topologies, and the actual number of distinct topologies may be small relative to the length m of the complete sequence of topologies throughout the entire simulation history.
  • a reduced topology matrix that includes only the distinct topological vectors may be defined. Without loss of generality, it can be assumed that in the sequence s 0 , s 1 , . . . , s i , . . . s j , . . . s r , . . . , s m for some r ≦ m, the first r+1 vectors are distinct. Thereafter, a reduced topology matrix is defined as
  • i br L and v br C must be bounded and continuous across topological boundaries. Recalling how i br L and v br C are related to the vectors of independent inductor currents and capacitor voltages, (5.11)-(5.12) can also be rewritten as
  • the KVL and KCL matrices corresponding to the second topology s i+1 can also be expressed in terms of the branch order in the respective TCF as
  • A_{i+1}^{C} = \begin{bmatrix} I_{i+1}^{C} & \Lambda_{i+1}^{L} & 0 \end{bmatrix}\,(T_C^{i+1})^{T} \qquad (5.16)
  • T L i+1 and T C i+1 are appropriate permutation matrices. Based on the structure of (5.15)-(5.16), it is possible to define the following right pseudo inverses
  • B i+1 base and A i+1 base are full-rank matrices containing only the columns that form the basis in either case. Therefore, (5.17)-(5.18) can be used to compute the corresponding initial conditions for the new topology.
  • the initial values of the independent currents and voltages for the topology s i+1 can be computed as
  • the values in the system are constrained to represent finite currents and voltages.
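  • A small numerical sketch of this step is given below; the matrices are hypothetical placeholders rather than those of (5.15)-(5.18), and NumPy's generic pseudo-inverse is used in place of the structured right pseudo-inverses defined above.

    import numpy as np

    # Hypothetical KVL matrices for the outgoing topology s_i and incoming topology s_i+1.
    B_i   = np.array([[1.0, 0.0, 1.0],
                      [0.0, 1.0, 1.0]])
    B_ip1 = np.array([[1.0, 1.0, 0.0]])

    # Branch inductor currents just before commutation remain continuous.
    i_x_old = np.array([2.0, -1.0])
    i_br_L  = B_i.T @ i_x_old

    # Map the continuous branch currents onto the independent currents of the
    # new topology through a right pseudo-inverse (generic pinv shown here).
    i_x_new = np.linalg.pinv(B_ip1.T) @ i_br_L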
  • the total number of rows for A a C and for B b L is then determined by the maximum number of state variables (5.7)-(5.8) needed to represent networks N L and N C for any topology in the sequence s 0 , s 1 , . . . , s i , . . . , s m .
  • Using Ã a C and {tilde over (B)} b L in the dimensionality reduction procedures for N L and N C , respectively, the reduced matrices L x and C y will be block-diagonal, with full-rank upper-left blocks and zeros elsewhere. Therefore, having such a structure, these matrices are also block-invertible.
  • the same block-diagonal structure is preserved under any non-singular coordinate transformation applied to the state variables in (2.106) or (2.111).
  • a coordinate transformation matrix can be defined as
  • M{tilde over (x)} is an additional vector of redundant states. That is, any element of this vector is nothing more than some linear combination of the state variables in {tilde over (x)}. Then, defining the right pseudo inverse of {tilde over (K)} to be of the form
  • the continuity of inductor currents and capacitor voltages across topological boundaries results in continuity of respective branch currents and voltages as expressed in (5.11)-(5.12).
  • condition (5.47) does not guarantee continuity of state variables across the topological boundaries, and {tilde over (x)} i ∉ C.
  • the first step in deriving the state selection algorithm subject to ⁇ tilde over (x) ⁇ ⁇ ⁇ C is to define the change of variables (5.39)-(5.41) for two adjacent generic topologies s i and s i+1 , and enforce (5.47)-(5.48). To achieve this, it is necessary to relate state vectors x ⁇ i and x ⁇ i+1 through some matrices such that
  • the state variables are selected to be inductor currents and capacitor voltages; whereas the dimensions of respective state vectors are kept constant by utilizing (5.36)-(5.42). Then the state variables for the instance of commutation between two topologies can be related as
  • the second step in deriving an algorithm for global state space continuity consists of applying results of the first step recursively to all topologies of the sequence (5.1).
  • network N (G, P, s i ) is modeled using a transformation of variables of the form
  • the transformation matrices (5.56)-(5.57) can be used in (5.42) in order to achieve ⁇ tilde over (x) ⁇ ⁇ ⁇ C.
  • the same transformation of variables can be applied to the more general system of DAEs (4.10), (4.21), and (4.45)-(4.50), in order to obtain global continuity of state variables.
  • the state selection method discussed herein provides a convenient structure of DAEs for the ASMG.
  • a brute-force implementation of the ASMG involving sparse topological matrices together with sparse parameter matrices would result in very long simulation run times in the case of electrical networks with time-varying and/or non-linear parameters and a large number of branches. Therefore, an efficient numerical implementation of the ASMG-generated DAEs is a very important issue that will now be addressed.
  • N (G, P, S)
  • For each network branch it is possible to assemble an array that records the loop numbers corresponding to the loops in which this branch participates. Such an array represents a loop participation set for the given branch.
  • Similarly, it is possible to assemble arrays of cutset branches corresponding to each tree-branch, and arrays of cutset participations for all branches.
  • These compact arrays of loop (loop participation) and cutset (cutset participation) sets can be formed on a network-by-network basis. As will be shown, assembling DAEs and computing their terms using arrays of branch sets avoids unnecessary operations and, therefore, significantly reduces the computational complexity.
  • v_{br}^{L} = R_{br}^{L}\, i_{br}^{L} + \frac{\partial L_{br}}{\partial t}\, i_{br}^{L} + L_{br}\, \frac{\partial i_{br}^{L}}{\partial t} + e_{br}^{L} \qquad (6.1)
  • v_{br}^{L}(k) = \sum_{l \in B^{L}} R_{br}^{L}(k,l)\, i_{br}^{L}(l) + \sum_{l \in B^{L}} \frac{\partial L_{br}}{\partial t}(k,l)\, i_{br}^{L}(l) + \sum_{l \in B^{L}} L_{br}(k,l)\, \frac{\partial}{\partial t} i_{br}^{L}(l) + e_{br}^{L}(k) \qquad (6.2)
  • v_{br}^{L}(k) = \sum_{l \in M_k^{R}} R_{br}^{L}(k,l)\, i_{br}^{L}(l) + \sum_{m \in M_k^{Lt}} \frac{\partial L_{br}}{\partial t}(k,m)\, i_{br}^{L}(m) + \sum_{n \in M_k^{L}} L_{br}(k,n)\, \frac{\partial}{\partial t} i_{br}^{L}(n) + e_{br}^{L}(k) \qquad (6.3)
  • the vector of branch voltages v br L can be computed using (6.3) with reduced expense compared to (6.2).
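  • The sketch below illustrates the set-based evaluation of (6.3) for a single branch; the per-branch index lists M_R, M_Lt, and M_L are hypothetical placeholders for the branch sets described above.

    def inductive_branch_voltage(k, R, dLdt, L, i, didt, e, M_R, M_Lt, M_L):
        # Equation (6.3): sum only over the branches recorded in the sets
        # M_k^R, M_k^Lt, and M_k^L rather than over every branch of the network.
        v = e[k]
        v += sum(R[k][l] * i[l] for l in M_R[k])
        v += sum(dLdt[k][m] * i[m] for m in M_Lt[k])
        v += sum(L[k][n] * didt[n] for n in M_L[k])
        return v

    # Hypothetical two-branch example with constant, uncoupled parameters.
    v0 = inductive_branch_voltage(0, R=[[0.5, 0.0], [0.0, 1.0]],
                                  dLdt=[[0.0, 0.0], [0.0, 0.0]],
                                  L=[[1e-3, 0.0], [0.0, 2e-3]],
                                  i=[2.0, 1.0], didt=[10.0, 0.0], e=[5.0, 0.0],
                                  M_R=[[0], [1]], M_Lt=[[], []], M_L=[[0], [1]])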
  • i_{br}^{C} = G_{br}^{C}\, v_{br}^{C} + \frac{\partial C_{br}}{\partial t}\, v_{br}^{C} + C_{br}\, \frac{\partial v_{br}^{C}}{\partial t} - j_{br}^{C} \qquad (6.4)
  • i_{br}^{C}(k) = \sum_{l \in M_k^{G}} G_{br}^{C}(k,l)\, v_{br}^{C}(l) + \sum_{m \in M_k^{Ct}} \frac{\partial C_{br}}{\partial t}(k,m)\, v_{br}^{C}(m) + \sum_{n \in M_k^{C}} C_{br}(k,n)\, \frac{\partial}{\partial t} v_{br}^{C}(n) - j_{br}^{C}(k) \qquad (6.5)
  • the sets M k R , M k L , M k Lt , and M k G , M k C , M k Ct can be obtained for the initial topology and then updated for each new topology.
  • A(x, t) represents all terms that define the state self-dynamics.
  • the forcing term g(u, t) takes into account all external sources and interconnections with other networks. Since all inputs represented by u have the same units, the function g(u, t) will have the form of a summation. Finally, M(x, t) is the mass matrix.
  • N L with time-varying parameters is considered first.
  • the state equation for this case can be written as
  • the set of branch indices corresponding to the k-th inductive link-branch is denoted as L k L , which contains the k-th branch.
  • the set L k Le ⁇ L k L includes only those N L loop-branches that have non-zero external voltage sources.
  • L k LC contains the indices of the loop-branches that happen to be in the algebraic network
  • L k LC includes the loop-branches that are in N C .
  • L k = L k L ∪ L k LA ∪ L k LC contains all loop-branches (complete basic loop) corresponding to the k-th inductive link-branch.
  • the reduced inductance and reduced resistance matrices in (6.9) are defined as a triple product of appropriate matrices. Using the typical matrix multiplication process, the entries of these matrices are found as
  • (6.14) and (6.15) can be updated by running the summation indices m and n only over those branches that have time-varying parameters.
  • This set can be assembled by recording indices of the non-zero entries of the k-th column of B b L . Thereafter, it is possible to express the contribution of mutual inductance between the m-th and n-th branches in the reduced inductance matrix as follows
  • \Delta L_x(i,j) = B_b^L(i,m)\, L_{br}(m,n)\, B_b^L(j,n), \quad i \in LP_m,\; j \in LP_n \qquad (6.16)
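  • An illustrative sketch of assembling loop participation sets from the non-zero column entries of B b L and of applying an update of the form (6.16) is given below; the function and argument names are hypothetical.

    import numpy as np

    def loop_participation_sets(B):
        # LP_k: row indices of the non-zero entries of the k-th column of B_b^L.
        return [np.nonzero(B[:, k])[0] for k in range(B.shape[1])]

    def add_mutual_contribution(Lx, B, Lbr, m, n, LP):
        # Contribution of the mutual inductance between branches m and n,
        # touching only the rows in LP_m and the columns in LP_n as in (6.16).
        for i in LP[m]:
            for j in LP[n]:
                Lx[i, j] += B[i, m] * Lbr[m, n] * B[j, n]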
  • forcing vector can be expressed as
  • the k-th component of the forcing vector (6.19) is the sum of currents external to the capacitive network.
  • the non-zero entries of the KCL matrices A a C , D a CA , and D LC correspond to some cutset branches.
  • the set of cutset branches corresponding to the k-th capacitive tree branch is denoted as C k C , and the subset C k j ⁇ C k C includes the branches that have nonzero external current sources.
  • the sets C k CA and C k CL include the indices of the cutset branches that are in the algebraic and inductive networks, respectively.
  • C k = C k C ∪ C k CA ∪ C k CL is the complete set of cutset branches corresponding to the k-th capacitive tree-branch. As before, all branch sets are assembled for each new network topology. Thereafter, the k-th component of the forcing vector can be computed with minimized effort as
  • the reduced parameter matrices can be computed using cutset participation sets.
  • the cutset numbering can be made the same as that for the rows of A a C .
  • the set CP k includes the cutset indices in which the k-th capacitive branch participates.
  • this set can be assembled by recording indices of the non-zero entries of the k-th column of A a C . Thereafter, the contribution of a mutual capacitance and/or mutual conductance between the m-th and n-th branches in their respective reduced matrices can be expressed as
  • (6.21)-(6.22) define contributions of the self-capacitance and self-conductance of the k-th branch of the capacitive network.
  • Using (6.21)-(6.22), the reduced parameter matrices in (6.18) can be computed and updated efficiently.
  • n is the number of branches in the given network
  • n s R ,n s L ,n s Lt and n m R ,n m L ,n m Lt are the total numbers of self and mutual resistive, inductive, and time-varying inductive parameters, respectively. The values of those numbers are readily determined from the branch sets M R , M L , and M Lt .
  • n denotes the number of inductive link branches or the total number of inductive loops
  • n i Le is the number of voltage sources in each i-th inductive loop.
  • n i LA and n i LC are the numbers of algebraic and capacitive network branches, respectively, that are also part of i-th inductive loop.
  • the building algorithm for assembling reduced parameter matrices consists of two parts: initialization and update.
  • the pseudo code implementing these procedures is given in Code Blocks 3-5. Since all reduced parameter matrices can be assembled using their respective loop and loop participation (cutset and cutset participation) branch sets, the building algorithm for the other reduced matrices can be obtained by appropriately modifying the pseudo code given therein. Therefore, only the reduced inductance matrix is discussed.
  • Code Block 3 can be used to assemble the reduced parameter matrices utilizing loop (cutset) sets as in (6.14). This code may be used to initialize and update the parameter matrices. The initialization is performed by writing zeros in each memory slot, and then by computing the appropriate contribution due to the constant inductances into an auxiliary storage matrix L x . Pre-computing the contribution of all time-invariant inductances into a separate matrix suggests this approach.
  • the reduced parameter matrices can be assembled as shown in Code Block 4, without the disadvantage of having to copy the entire matrix for each update.
  • the code in Code Block 4 avoids the scheme (6.26) altogether.
  • If the code in Code Block 4 is used to update the reduced matrix, the original matrix will be destroyed, which, in turn, makes it difficult to carry the contribution due to time-invariant parameters from one update to the next.
  • this function is accomplished by copying the entire matrix; whereas Code Block 4 updates only the relevant entries.
  • One way of using the algorithm in Code Block 4 for systems with both time-varying and time-invariant parameters is to “undo” the previous update before performing a new one.
  • This two-step update procedure is equivalent to updating the matrix once with the difference between the old and the new values of the variable parameters.
  • the pseudo code illustrating the update procedure is given in Code Block 5.
  • the reduced inductance matrix is updated due only to changes in variable inductances.
  • the implementation of this update algorithm requires auxiliary static memory for storing previous update values. This storage should be of the same size and type as the original reduced parameter matrix. It is also noted that the subtraction of the old and new inductances should be performed first as indicated by the parentheses so as to reduce the round-off errors due to finite precision machine arithmetic.
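  • The following sketch illustrates the difference-based update described above, in the spirit of Code Block 5 (which is not reproduced here); L_prev is the auxiliary storage holding the variable inductances used in the previous update, and all names are hypothetical.

    def update_variable_inductances(Lx, B, L_new, L_prev, variable_pairs, LP):
        # Update the reduced inductance matrix only where variable inductances
        # changed; the old value is subtracted from the new one first (note the
        # parentheses) to limit round-off error, and the value actually applied
        # is remembered for the next update.
        for (m, n) in variable_pairs:
            delta = (L_new[m][n] - L_prev[m][n])
            for i in LP[m]:
                for j in LP[n]:
                    Lx[i][j] += B[i][m] * delta * B[j][n]
            L_prev[m][n] = L_new[m][n]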
  • the complexity of the algorithms in Code Blocks 3-5 is a function of the size of the loop and loop participation sets. The sizes of these sets and their structures are, in turn, determined by the self parameters and the mutual coupling between network branches. Therefore, the complexity of algorithms Code Blocks 3-5 is highly system-dependent. Analyzing each algorithm for the worst case, which corresponds to all branches having variable parameters with all branches coupled with each other, results in similar complexity for all algorithms and reveals little about their actual performances with respect to practical networks. Instead, the problem of assembling the reduced parameter matrices using algorithms in Code Block 3 and Code Block 4 should be considered with respect to some typical cases. Expressions for the complexities can be significantly simplified using certain assumptions. Thus, for the purpose of derivation, it is assumed that there is no mutual coupling. Then, inspecting Code Block 3 for implementing (6.14) with this assumption in mind, the complexity can be expressed as
  • n is the number of inductive loops
  • m is the average number of branches in each loop.
  • m i the number of branches in the i-th loop
  • the complexity of the code in Code Block 4 is determined by the sizes of the loop participation sets LP.
  • the computational effort for the contribution from one branch is proportional to the square of the number of loops in which this branch participates.
  • the expression for the complexity for the code in Code Block 4 becomes
  • ODE solver 41 maintains state vector x, which is provided to inductive links current calculator 43 (for state variables relating to inductive elements) and capacitive trees voltage calculator 45 (the portions x C represent state variables relating to capacitive elements).
  • Inductive links current calculator 43 calculates the current i link L in the inductive link branches of the circuit, providing that information to resistive network algebraic equation calculator 47 and inductive network state/output equation evaluator 49 .
  • capacitive trees voltage calculator 45 calculates the voltages v tree C in capacitive tree branches as discussed above, and provides that information to resistive network algebraic equation component 47 and capacitive network state/output equation evaluator 51 .
  • Resistive network algebraic equation evaluator 47 uses i link L and v tree C with e br A and j br A to calculate i br A and v br A , which are provided to inductive network state/output equation component 49 , capacitive network state/output equation component 51 , event variable calculator 53 , and the system output.
  • Inductive network state/output equation component 49 uses i link L and v br A along with inputs e br L and j br L to determine i br L , v br L , (provided to event variable calculator 53 and the system output) and dx L /dt (provided to the ODE solver 41 ).
  • Capacitive network state/output equation component 51 uses i br A and v tree C along with e br C and j br C to calculate i br C , v br C , (provided to event variable calculator 53 and the system output) and dx C /dt (provided to the ODE solver 41 ).
  • the branch voltages and currents are output as vectors i br and v br . These values are used with u br by event variable calculator 53 to produce event variable z xnp , which is passed to the ODE solver 41 .
  • ODE solver 41 monitors for negative-to-positive zero crossings of z xnp . If a zero crossing is encountered, which indicates that a switch or switches are opening or closing, the state selection algorithm (part of state model generator 31 in FIG. 3 ) is invoked to establish a new set of state variables and to update the branch sets and matrices used by the state equation building algorithm discussed above.
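  • By way of illustration only, the sketch below shows how a negative-to-positive zero crossing of an event variable can be detected with an off-the-shelf solver (here SciPy's event mechanism); the switching variable z_xnp shown is a hypothetical placeholder rather than the one produced by event variable calculator 53.

    from scipy.integrate import solve_ivp

    def z_xnp(t, x):
        # Hypothetical switching variable; a real implementation would compute it
        # from the branch currents and voltages.
        return x[0] - 0.5

    z_xnp.terminal = True      # stop so the state selection algorithm can run
    z_xnp.direction = 1        # react only to negative-to-positive crossings

    sol = solve_ivp(lambda t, x: [1.0 - x[0]], (0.0, 10.0), [0.0], events=z_xnp)
    t_switch = sol.t_events[0]  # time(s) at which a topology change is triggered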
  • Inductive links current calculator 43 will now be discussed in relation to FIG. 6 .
  • Resistive network algebraic calculation block 47 will now be discussed in more detail in relation to FIG. 8 .
  • At decision block 81 it is determined whether the resistive network parameters are constant or variable. If they are constant, the output current vector is calculated as shown in block 83 . If it is determined at decision block 81 that the resistive network parameters are variable, the current vector for the algebraic network branches is determined in block 85 by solving the equation shown therein. After the current vector is determined in either block 83 or block 85 , the voltage vector is determined at block 87 , as shown therein.
  • the inductive network state and algebraic equation determining block 49 is shown in more detail and will now be discussed in relation to FIG. 9 .
  • the inductive network branch currents i br L and forcing term g L are computed.
  • At decision block 91 it is determined whether the inductive network contains constant or variable parameters. If the parameters are constant, then the type of state variables being used is identified at decision block 92 . For currents, the derivatives with respect to time of the inductive portion of the state vector and the currents in the inductive link branches are calculated at block 93 . Similarly, for state variables defined as fluxes, those values are calculated at block 94 . In either event (from either block 93 or block 94 ), the time-derivative of inductive branch currents and the voltages for inductive branches are determined at block 95 , and the outputs are generated.
  • variable parameters are present, then it is determined at decision block 96 whether currents or fluxes have been selected as state variables for the inductive portion of the network. If currents have been selected, then the time-derivatives of X L and the inductive link-branch currents are determined at block 97 . Correspondingly, if fluxes are used as state variables, then those values are determined at block 98 . In either event (from either block 97 or block 98 ), the time-derivative of inductive branch currents, as well as the inductive branch voltages, are determined at block 99 , and the output of overall block 49 is generated.
  • the capacitive network state and algebraic equation component 51 will now be discussed in relation to FIG. 10 .
  • values are calculated for the voltages v br C and forcing term g C for the capacitive network.
  • At decision block 101 it is determined whether constant or variable parameters are present. If constant parameters are being used, then it is determined at decision block 102 whether voltages or charges are used as state variables for the capacitive network. In the former case, the time-derivative of the state variables relating to the capacitive network and the voltages for the capacitive tree branches are calculated at block 103 . Likewise, if charges are being used as state variables, those values are calculated at block 104 . In either event (whether from block 103 or block 104 ), the system calculates the capacitive branch currents and the time-derivative of the capacitive branch voltages at block 105 , and the output values are generated.
  • If variable parameters are present, the type of state variables (voltages or charges) is determined at decision block 106 . If voltages are being used, then the time-derivatives of the state variables relating to capacitive branches, as well as the voltages relating to capacitive tree branches, are determined at block 107 . If charges are being used for state variables, then the equations of block 108 are solved. In either event (whether from block 107 or block 108 ), the capacitive branch currents and the time-derivative of the capacitive branch voltages are determined at block 109 , and the output of overall block 51 is generated.
  • the ASMG has a single switch branch
  • different logic may be specified by the user to determine when a given switch is opened or closed.
  • Four switch types, each implementing specific switching logic, have been considered in an exemplary embodiment of the invention. These switch types were selected to represent many common solid-state switching devices such as diodes, thyristors, transistors (MOSFET, BJT, IGBT, for example), triacs, and the like.
  • the built-in switching logic does not permit the opening of switches that would cause discontinuities of currents in inductors and/or current sources, as well as closing of switches that would cause discontinuities of capacitor voltages and/or voltage sources.
  • the switch can be opened or closed at any instant of time by controlling variable u br , subject to KCL, KVL, and energy conservation principles.
  • LBS Latched Bidirectional Switch
  • This logic can be used to represent an AC arcing switch or an ideal solid-state triac.
  • This switch can be used to model an ideal thyristor.
  • These four switch types can be advantageously integrated into the circuit simulation system routines. For example, switching analysis and topology evaluation for state selection can be optimized using the additional information inherent in each switch type, as will occur to those skilled in the art.
  • the processes might be carried out on a single processor or be distributed among multiple processors.
  • time-domain steps in the simulation might be the same throughout the system, or might vary between variables, for a single variable (over time), between portions of the circuit being simulated, or other divisions as would occur to one skilled in the art.
  • numerical techniques used to perform integration and/or differentiation (e.g., trapezoidal, NDF, or Adams techniques) may also vary.
  • the rates and maximum error parameters can also vary and might be consistent across the system, vary among portions of the circuit, change over time, or otherwise be applied as would occur to one skilled in the art.
  • Various embodiments of the present invention will also use different techniques to revise the state equations stored for each topology.
  • the data structure(s) that describe the state equations before a topology change event are modified only as much as necessary to take into account the new topology (i.e., only the changed portions of the circuit).
  • new state equations may be derived in whole or in part for one or more topologies.
  • a cache is maintained of state equations for some or all of the topologies that are encountered.
  • the parameters of the branches in the system can also be updated during the simulation, using a variety of strategies.
  • a data structure reflecting the constant parameters is maintained.
  • the constants are copied into a new data structure, and the variable parameters are added to them.
  • the parameters for the circuit at a given time t i are stored in a data structure.
  • the variable parameters from time t i are subtracted from the values in the data structure, then updated for time t i+1 , and the new values are added to the values in the data structure.
  • multiple models can be used for a single physical component or sub-circuit. For example, a detailed, computationally intensive model might be used for a component when it has a rapidly varying input. Then, when the input has settled to a slower-varying state, a simpler, less computationally intensive model may be substituted for the complex one.
  • data structures used to represent information in various embodiments of the present invention vary widely as well.
  • the stated structures may be optimized for programming simplicity, code size, storage demands, computational efficiency, cross-platform transferability, or other considerations as would occur to one skilled in the art.
  • Partitioning of branch sets as discussed herein may employ a wide variety of algorithms as would occur to one skilled in the art.
  • Some spanning tree algorithms that are well adapted for use with the present invention are presented in T. H. Cormen, C. E. Leiserson, R. L. Rivest, Introduction to Algorithms , MIT Press, McGraw Hill, 1993; and R. E. Tarjan, Data Structures and Network Algorithms , Bell Laboratories, Murray Hill, 1983.
  • the application of partitioning algorithms to the data structures involved will likely be a consideration for each particular implementation of the present invention.

Abstract

A system, method, and apparatus select state variables for, build state equations of, and simulate time-domain operation of an electronic circuit. The circuit is modeled with three branch types (inductor, resistor, voltage source in series; capacitor, resistor, current source in parallel; and switch), including four pre-defined switch types (unidirectional unlatched, bidirectional unlatched, unidirectional latched, and bidirectional latched). Automated analyses determine efficient state variables based on the currently active circuit topology, and state equations are built and applied. Switching logic determines when switch states change, and state equations for the new topology are either drawn from a cache (if the topology has already been processed) or derived anew. The switch control signals may be combined into a single switching variable, defined as a function of the state output.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • This application is a continuation of U.S. patent application Ser. No. 10/043,981 filed Jan. 11, 2002, which claims the benefit of U.S. Provisional Patent Application 60/261,033, filed Jan. 11, 2001, both of which are incorporated herein by reference.
  • STATEMENT REGARDING FEDERALLY SPONSORED RESEARCH OR DEVELOPMENT
  • This invention was made with Government support under Contract F33615-99-C-2911 awarded by the U.S. Department of Defense. The United States Government has certain rights in the invention.
  • BACKGROUND
  • The present invention relates to the design, modeling, simulation, and emulation of electronic circuitry. More specifically, the present invention relates to numerical time-domain simulation of analog or digital electrical circuits using mathematical expressions. Present simulation systems suffer from limitations in the kinds and topologies of circuits to which they may be applied. The complexity of systems to be simulated is also limited in current systems by various inefficiencies in the simulation modeling process, including state selection and state equation building methods. There is thus a need for further contributions and improvements to circuit simulation technology.
  • Existing techniques for circuit simulation include two approaches. In one, the underlying numerical algorithms are based upon a state-space model of the system being simulated or analyzed. The system to be simulated is specified either in the form of a text file using a well-defined syntax, or graphically using boxes or icons to represent computational elements such as summers, multipliers, integrators, function generators, and/or transfer functions, for example. General purpose mathematical packages operate the models by assembling a state-space model of the system from the user-supplied description. The time-domain response is then calculated numerically using established integration algorithms.
  • In the other category of solutions, the system is described as an electrical circuit, the fundamental branches of which may include resistors, inductors, capacitors, voltage sources, and/or current sources, for example. Other circuit elements such as diodes, transistors, or electromechanical devices may also be defined as a variation or a combination (sub-circuit) of the fundamental branches. In this type of simulation system, the model developer describes the circuit in the form of a list of parameters of the fundamental or user-defined circuit elements and the layout of the circuit. Nodal or modified-nodal analysis techniques are then employed to simulate operation of the circuit. In that process, the differential equations associated with each inductive and capacitive branch are modeled in discrete form using difference equations to relate the branch current and voltage at a given instant of time to the branch current and/or voltage at one or more preceding instants of time. The difference equations for the overall circuit are then assembled automatically and solved using established methods to simulate the time-domain response.
  • It is of note that the second category of systems does not force the model developer to derive the state equations or block-diagram representation of the circuit. On the other hand, a state-space model of the overall circuit is never established or calculated by the program. Consequently, the numerous (and generally more efficient) techniques for analysis of state-space models cannot be applied in such systems.
  • More recently, a system has been developed that produces a state-space realization for the linear part of a system, the non-linear portion and variable parameters being represented as separate or external blocks. The overall system is then simulated using the appropriate solvers in the host mathematical package. In these systems, however, switches are modeled as resistors with very small or very large values. The resulting state model, therefore, may be artificially stiff and have larger dimension than necessary because of the states artificially introduced by the resistor-based switch model. In addition, the system does not incorporate mutual, non-linear, or variable inductances, capacitances, and resistances in its standard library blocks and components.
  • Some work on an automated state model generator and circuit simulator (ASMG) is reported in O. Wasynczuk and S. D. Sudoff, “Automated State Model Generation Algorithm for Power Circuits and Systems,” IEEE Transactions on Power Systems, Vol. 11, No. 9, November, 1996, pp. 1951-1956. In this work, circuits are specified and analyzed using fundamental branches as shown in FIG. 1. Each fundamental branch includes a switch, resistor rbr, inductor Lbr, voltage source ebr, conductance gbr, capacitor Pbr, and current source jbr. Each parameter can be fixed or time-varying, and ideal components can be modeled by setting the remaining parameters to zero. Given the parameters for each branch and the list of nodes that the branches connect, the ASMG generates a state-space model of the overall circuit. The state-space representation is calculated and solved using numerical methods. If a change in switching state occurs, the state model generator then recalculates the state-space model and establishes the appropriate initial conditions for the new topology. A disadvantage of this system is that it cannot be used to simulate circuits that include loops composed of voltage sources, capacitors, and/or resistors. This limitation dramatically hindered the ability of the ASMG to simulate high-frequency switching transients of power-electronic-based systems.
  • Subsequent improvements to the ASMG include the fundamental branches illustrated in FIG. 2, using the same notation for parameters as was used in FIG. 1. Using this form of modeling, more complicated circuit elements could be represented, including transistors, diodes, and thyristors. These improvements are discussed in J. Jatskevich, “A State Selection Algorithm for the Automated State Model Generator,” Ph.D. Thesis, Purdue University, 1999. Additional improvements are described herein.
  • As used herein, a “spanning tree” over a graph (comprising a set of branches and a set of nodes to which the branches are connected) is defined to be a subset of the set of branches such that at least one branch in the subset is connected to each node from the set of nodes, yet no loop can be formed from the branches in the subset.
  • Also, as used herein, a “topology change event” occurs when the control signal for one or more switching elements causes the switching state of that element to change. The nodes to which that element is connected will then become (or cease to be) galvanically connected to another portion of the circuit. In most cases, this change affects the state equations for the overall circuit.
  • SUMMARY
  • It is an object of the present invention to provide an improved system, method, and apparatus for simulating electrical and electronic circuits. Another object is to provide a system, method, and apparatus to more efficiently simulate the operation of a wider variety of electrical and electronic circuits than has previously been possible.
  • These objects and others are achieved by various forms of the present invention. One form of the present invention is a method, including creating one or more data structures sufficient to model an electronic circuit as a collection of n (at least two) elements. These comprise zero or more LRV elements, zero or more CRI elements, and zero or more switching elements. The LRV elements each have at least one of (a) a non-zero inductance parameter Lbr, (b) a non-zero resistance parameter rbr, and (c) a non-zero voltage source parameter ebr, but neither a non-zero capacitance parameter, nor a non-zero current source parameter, nor a switch parameter. The CRI elements each have at least one of (a) a non-zero capacitance parameter Cbr, (b) a non-zero resistance parameter rbr, or (c) a non-zero current source parameter jbr, but neither a non-zero inductance parameter, nor a non-zero voltage source parameter, nor a switch parameter. The switching elements each have a switch state and neither a non-zero inductance parameter, a non-zero capacitance parameter, a non-zero resistance parameter, a non-zero voltage source parameter, nor a non-zero current source parameter. A first set of state equations is automatically generated from the one or more data structures, and operation of the electronic circuit is simulated by application of said first set of state equations. In this method, the collection comprises either (1) an LRV element for which at least two of Lbr, rbr, or ebr are non-zero, or (2) a CRI element for which at least two of Cbr, rbr, or jbr are non-zero.
  • In variations of this form of the invention, the simulating step includes producing state output data. In some embodiments, some or all of the parameters in the first set of state equations change over (simulation) time as a function of the state output data. In some embodiments, some or all of the parameters change over (simulation) time due to a time-varying parameter of at least one element in the collection.
  • In other variations of this form of the invention, a second set of state equations is generated from the one or more data structures upon the occurrence of a first topology change event. In some such embodiments, the generating step simply involves modifying only the subset of said first set of state equations that depends on the one or more switching elements that have changed. In other such embodiments, each unique vector of switch states represents a topology of the overall circuit, and the method also includes (1) storing the first set of state equations in a cache; (2) after a second topology change event, determining whether a set of state equations in the cache represents the new topology; and (3a) if the determining step is answered in the affirmative, using the set of state equations that represents the new topology to simulate operation of the circuit after the second topology change event; or (3b) if the determining step is answered in the negative, building a third set of state equations that represents the new topology, and using the third set of state equations to simulate operation of the circuit after the second topology change event.
  • In other such embodiments, the method also includes (1) storing said second set of state equations in a cache; (2) after a third topology change event, deciding whether a set of state equations in the cache represents the new topology; and (3a) if the deciding step is concluded in the affirmative, using the set of state equations from the cache that represents the new topology to simulate operation of the circuit after the third topology change event; or (3b) if the deciding step is concluded in the negative, building a new set of state equations that represents the new topology, and using the new set of state equations to simulate operation of the circuit after the third topology change event.
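  • An illustrative sketch of such a topology-keyed cache is given below; build_state_equations is a hypothetical placeholder for the state equation building algorithm.

    def get_state_equations(switch_states, cache, build_state_equations):
        # Each unique vector of switch states identifies a topology; reuse cached
        # state equations when the topology has been processed before, otherwise
        # build them and remember the result for later topology change events.
        key = tuple(switch_states)
        if key not in cache:
            cache[key] = build_state_equations(switch_states)
        return cache[key]

    # Hypothetical usage with a placeholder builder.
    cache = {}
    eqs = get_state_equations([1, 0, 1], cache, lambda s: {"topology": tuple(s)})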
  • Another form of the present invention is a method including creating one or more data structures that together store characteristics of some active branches Bactive that make up a graph of nodes and branches that form a circuit, wherein Bactive consists of (i) a (possibly empty) set BL of inductive branches, each having a non-zero inductive component but neither a capacitive component nor a variable switch state; (ii) a (possibly empty) set BC of zero or more capacitive branches, each having a non-zero capacitive component but neither an inductive component nor a variable switch state; and (iii) a (possibly empty) set BA of additional branches, each having neither an inductive component nor a capacitive component. Bactive is partitioned into a first branch set Btree active and a second branch set Blink active, where the branches in Btree active form a spanning tree over Bactive, giving priority in said partitioning to branches not in BL over branches in BL. Blink active is sub-partitioned into a third branch set Blink L, and a fourth branch set Blink CA, where Blink L=Blink active∩BL. A fifth branch set BCA is identified as the union of (i) Blink CA, (ii) BC∩Btree active, and (iii) those branches in Btree active that form a closed graph when combined with Blink CA. BCA is partitioned into a sixth branch set {tilde over (B)}tree CA and a seventh branch set {tilde over (B)}link CA, where the branches in {tilde over (B)}tree CA form a spanning tree over BCA, giving priority in said partitioning to branches in BC over branches not in BC. An eighth branch set Btree C={tilde over (B)}tree CA∩BC is identified. A set of state variables is selected, comprising (a) for each branch of Blink L, either the inductor current or inductor flux, and (b) for each branch of {tilde over (B)}tree C, either the capacitor voltage or capacitor charge. A plurality of states of the circuit are simulated using the set of state variables.
  • In a variation on this form of the invention, the partitioning steps each comprise an application of a weighted spanning tree algorithm, such as, for some positive numbers wL and wC, (a) for the partitioning of Bactive, a minimum spanning tree algorithm is used with weight function
  • \omega_L(b_j) = \begin{cases} w_L & \text{if branch } b_j \in B_L \\ 0 & \text{otherwise;} \end{cases}
  • and (b) for the partitioning of BCA, a maximum spanning tree algorithm is used with weight function
  • \omega_C(b_j) = \begin{cases} w_C & \text{if branch } b_j \in B_C \\ 0 & \text{otherwise.} \end{cases}
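  • The sketch below applies the first weight function to a hypothetical three-branch loop, using the networkx library's minimum spanning tree routine merely as an example; the second partitioning step would analogously apply ω C with a maximum spanning tree routine.

    import networkx as nx

    w_L = 1.0                     # any positive number

    def omega_L(branch, B_L):
        # Inductive branches are penalized so they tend to become links.
        return w_L if branch in B_L else 0.0

    # Hypothetical three-branch loop: b1 inductive, b2 capacitive, b3 resistive.
    B_L = {"b1"}
    G = nx.Graph()
    G.add_edge(1, 2, name="b1", weight=omega_L("b1", B_L))
    G.add_edge(2, 3, name="b2", weight=omega_L("b2", B_L))
    G.add_edge(3, 1, name="b3", weight=omega_L("b3", B_L))

    tree = nx.minimum_spanning_tree(G)   # keeps b2 and b3; the inductive b1 becomes a link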
  • Another form of the invention is a system, including a processor and a computer-readable medium in communication with the processor, where the medium contains programming instructions executable by the processor to:
  • (1) build state equations for a first topology of an electronic circuit that has at least two switching elements, where each switching element has a switching state;
    (2) solve the state equations at some time ti to provide a state output vector, in which at least two elements control the switching states of the switching elements;
    (3) calculate the value of a switching variable as a function of the state output vector, wherein the value reflects whether the switching state of at least one of the switching elements is changing; and (4) if the value of the switching variable at time ti indicates that at least one of the switching elements is changing, determine a second topology of the electronic circuit for time ti + and obtain state equations for the second topology.
  • In a variation of this form of the invention, the programming instructions include a state equation building module, a solver module for ordinary differential equations, and a switching logic module. The building is performed by the state equation building module, the solving and calculating are performed by the solver module; and the determining is performed by the switching logic module. In some embodiments, the obtaining is performed by said switching logic module, while in others the obtaining is performed by said state equation building module.
  • In still other embodiments, at some time tj, at least two switching elements are each either rising-sensitive or falling-sensitive switches. Rising-sensitive switches change switching state if and only if a controlling element of the state vector has passed from a negative value to a non-negative value. Conversely, falling-sensitive switches change switching state if and only if a controlling element of the state vector has passed from a positive value to a non-positive value. In these embodiments, the function is the arithmetic maximum of (1) a maximum of all elements of the state vector that control rising-sensitive switches, and (2) the negative of the minimum of all controlling elements of the state vector that control falling-sensitive switches.
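  • The following sketch illustrates this combination; the index lists identifying rising-sensitive and falling-sensitive controls are hypothetical.

    def switching_variable(state_output, rising_idx, falling_idx):
        # z = max( max of rising-sensitive controls,
        #          -(min of falling-sensitive controls) ),
        # so a single negative-to-positive crossing of z flags any switching event.
        parts = []
        if rising_idx:
            parts.append(max(state_output[i] for i in rising_idx))
        if falling_idx:
            parts.append(-min(state_output[i] for i in falling_idx))
        return max(parts) if parts else float("-inf")

    # Hypothetical usage: elements 0 and 2 control rising-sensitive switches,
    # element 1 controls a falling-sensitive switch.
    z = switching_variable([-0.2, 0.3, -0.7], rising_idx=[0, 2], falling_idx=[1])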
  • A still further form of the invention is a system for simulating electronic circuits having a processor and a computer-readable medium in communication with said processor, where the medium contains programming instructions executable by the processor to read element parameters and node connection information from a data stream. The stream includes at least one switch type specification selected from the group consisting of: a unidirectional, unlatched switch; a bidirectional, unlatched switch; a unidirectional, latched switch; and a bidirectional, latched switch. The instructions are further executable by the processor automatically to calculate state equations for the circuit given the states of switches specified by the at least one switch type specification.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a diagram of a generic branch for modeling a circuit using a prior art system.
  • FIG. 2 is a diagram of two branches used to model circuits in another prior art system.
  • FIG. 3 is a block diagram of the overall process for simulating a circuit according to the present invention.
  • FIGS. 4A-4C are branches used to model circuit components using the present invention.
  • FIG. 5 is a block diagram of the run-time computational routines in one embodiment of the invention.
  • FIG. 6 is a block diagram detailing the inductive link current calculator for use with the routines shown in FIG. 5.
  • FIG. 7 is a block diagram detailing the capacitive tree voltage calculator for use with the routines shown in FIG. 5.
  • FIG. 8 is a block diagram detailing the resistive network algebraic equation calculator for use with the routines shown in FIG. 5.
  • FIG. 9 is a block diagram detailing the inductive network state/output equation calculator for use with the routines shown in FIG. 5.
  • FIG. 10 is a block diagram detailing the capacitive network state/output equation calculator for use with the routines shown in FIG. 5.
  • FIG. 11 is a schematic diagram of a switch element.
  • FIG. 12 is a block diagram of the interaction between subnetworks as analyzed for use with the present invention.
  • DESCRIPTION OF THE ILLUSTRATED EMBODIMENTS
  • For the purpose of promoting an understanding of the principles of the present invention, reference will now be made to the embodiment illustrated in the drawings and specific language will be used to describe the same. It will, nevertheless, be understood that no limitation of the scope of the invention is thereby intended; any alterations and further modifications of the described or illustrated embodiments, and any further applications of the principles of the invention as illustrated therein are contemplated as would normally occur to one skilled in the art to which the invention relates.
  • Generally, the system illustrated in FIGS. 3-10 and described herein simulates operation of a circuit in the time domain by collecting component parameters (variable and/or constant) and the overall circuit topology, then establishing a minimal state space for each active topology (as they are encountered), building state equations for that state space, and solving those equations for relevant steps in time. As discussed herein, the above process can be very much streamlined using the additional knowledge and techniques provided in the present invention.
  • The overall simulation process will now be discussed with reference to FIG. 3. A description 21 of the circuit, including an identification of constant and variable parameters, initial conditions, and the like, is fed into state model generator 31, which provides state model 23 to solver 33. Solver 33 generates the simulation output 25 for data consumers (such as log files, graphical visualization tools, and the like) as are known in the art. Solver 33 also provides continuous state information 27 to switching logic 35, which determines whether the state of one or more switches in the circuit should be changed. The result of this analysis, switching state 29, is provided to state model generator 31. If topological changes to the circuit are indicated in switching state 29, state model generator 31 updates the state model and passes that updated model to solver 33 for continued simulation.
  • Finite Electrical Networks
  • For the purposes of automated modeling, it is assumed that electrical networks can be composed of inductive, capacitive, and switch branch models depicted in FIGS. 4( a), (b), and (c), respectively. Using this approach, a wide variety of electrical circuits with different topologies can be modeled by appropriately setting the branch parameters. Such a modeling approach also assumes that only a finite number of branches is allowed for representation of any particular circuit, and that all branches have lumped parameters. Electrical networks satisfying this assumption can be modeled using a finite-dimensional state variable approach. Thereafter, it is possible to derive a finite-dimensional system of ordinary differential equations (ODEs) and algebraic equations (AEs) that would portray the dynamic behavior of currents and voltages for any such circuit. It is also important to note that only “practical” or “reasonable” circuits are considered, which means finite energy, finite current, and finite voltage electrical networks. For such networks, all energy storage elements such as capacitors and inductors are allowed to store only a finite amount of energy at any instant of time and that energy cannot change forms instantaneously. Also, no branches are allowed to carry infinite current through their components or have infinite voltage between nodes throughout the existence of the network including the instant of time that the network is switching between topological stages. These restrictions ensure a proper commutation (or proper transition) of the network from one topology to another. Therefore, a class of electrical networks that permits only proper commutation and possesses a finite dimensional system of differential algebraic equations (DAEs) can be defined as a class of finite networks. Such a class of finite electrical networks is denoted as Nq α+β where α and β are the number of state variables in the systems of ODEs for the inductive and capacitive networks, respectively, and q is the number of branches used to represent the layout of the network. Also, α and β are often called the network complexity. Thus, hereafter only proper and finite electrical networks from the class Nq α+β are considered.
  • Representation of Electrical Networks
  • This section will define certain terms with respect to electrical networks from the class Nq α+β. In particular, a network N can be defined by its topological configuration, which is best described by the associated graph denoted as G, and a set containing branch parameters denoted as P. Also, a particular switch branch may be identified as active or inactive depending on its state. Therefore, in order to specify N completely, a topological vector s, which would contain information regarding whether each branch is active or inactive, should be added. A topological state vector s would have ones in places corresponding to all currently active branches, and zeros for the remaining branches which were identified as inactive for the current topology. Thus a network of a general kind is an object from the class Nq α+β and is defined as a triple

  • N=(G,P,s)  (2.1)
  • It is sometimes convenient not to have to deal with inactive branches and to assume that all of the network branches are currently active. Such an assumption would significantly simplify notation without a significant loss of generality. Often, a network with all of its branches assumed to be active may be referred to as augmented or as having an augmented topology. In these cases, the vector s would not carry any additional information and, therefore, can be omitted from (2.1).
  • For the associated graph G to be a circuit, it is required that any single branch of the overall network N must be included in some loop with other branches so as to ensure the existence of a closed path for the current flow. Graphs with such a circuit property are referred to as being closed. For consistency, a reference direction (from − to +) is assigned to each branch with respect to the two nodes at its endpoints. Also, a network N, in the most general case, may be composed of several galvanically independent circuits. Therefore, a directed multi-graph denoted by Gd g can be associated with network N. Such a multi-graph may consist of g closed subgraphs. That is

  • Gd g={Gd 1, Gd 2, . . . Gd k, . . . Gd g}  (2.2)
  • where each k-th closed subgraph Gd k corresponds to its circuit. The subscript d indicates that the graph is directed. Even though all such subgraphs are electrically disjointed, they may be coupled through mutual inductances and/or capacitances between branches. On the other hand, if g=1, the associated network becomes a single circuit. Sometimes, for the purposes of analysis, it is also necessary to consider the associated undirected graph or subgraph, which can be denoted as Gg and Gk, and obtained by omitting the information regarding the reference direction of each branch. In a closed undirected graph, any branch must be a member of some cycle with more than one branch.
  • The directed multi-graph Gd g consists of a total of q branches from the branch set

  • B={b1, b2, . . . bj, . . . , bq}  (2.3)
  • which, in turn, are connected in some arrangement via p nodes from the node set

  • N={n1, n2, . . . , ni, . . . np}  (2.4)
  • Branches and nodes appear in (2.3) and (2.4) in a definite order which is given by their respective subscript indices. Using such a representation, it is possible to retrieve a particular branch or a node by referring to the respective ordered set, B or N, with an appropriate index-pointer.
  • Therefore, the graph is defined in terms of the branch and node sets as Gd g=(N,B). In general, the network N has a changing topology. At one instant of time, some branches may be inactive and some nodes may be unused. At another instant of time, other branches may switch on and off including different nodes, thus defining a new topology. Whenever it is needed, the number of currently active branches may be denoted as q′ and the number of currently used nodes as p′; whereupon the number of inactive branches and nodes may be expressed as q-q′, and p-p′, respectively. Often, when it is clear what the node and branch sets are, only their dimensions may be specified instead of actual sets. Also, a spanning tree Gtree k can be associated with each k-th closed subgraph Gk, and a corresponding forest of such trees Gtree g=(N,By) can be associated with the global graph Gg=(N,B). Note that the set of forest branches By is the subset of the global branch set, that is By⊂B.
  • There are several ways to represent a graph on a computer. Some methods may take advantage of the sparsity of the interconnection matrix and are, therefore, more efficient than others in terms of the memory required to store a particular graph. A less efficient but algebraically convenient method is to represent a graph in matrix form. In particular, a node incidence matrix Af for the multi-graph has p rows and q columns (one row for each node and one column for each branch, all ordered). Even though this matrix never has full rank, it can be referred to as a full node incidence matrix, meaning that it is not reduced to the size of its rank. This matrix is conveniently formed from positive and negative node incidence matrices as

  • A f =A + −A −  (2.5)
  • where the entries of A+ are A+(i, j)=1 if the j-th branch includes the i-th positive node, and zero otherwise. Similarly, the only non-zero entries of A are A(i, j)=1 if the j-th branch includes the i-th negative node. For the networks with changing topology, this matrix can be updated for each topology such that for each currently inactive branch bj the corresponding j-th column of Af is replaced with zeros. The transpose of Af is also known as the adjacency matrix of a graph. For large graphs, these matrices are quite sparse and, therefore, may be stored using techniques optimized for sparse matrices.
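  • An illustrative sketch of assembling A f according to (2.5) from a hypothetical list of branch endpoint pairs is given below.

    import numpy as np

    def node_incidence(num_nodes, branches):
        # branches: list of (positive_node, negative_node) pairs, one per branch.
        # A_plus(i, j) = 1 if branch j has node i as its positive node;
        # A_minus(i, j) = 1 if branch j has node i as its negative node.
        A_plus = np.zeros((num_nodes, len(branches)))
        A_minus = np.zeros((num_nodes, len(branches)))
        for j, (pos, neg) in enumerate(branches):
            A_plus[pos, j] = 1.0
            A_minus[neg, j] = 1.0
        return A_plus - A_minus          # A_f of (2.5)

    # Hypothetical three-node loop of three branches.
    A_f = node_incidence(3, [(0, 1), (1, 2), (2, 0)])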
  • It is understood in the art that each connected subgraph of the network N contributes exactly one zero-row to ARREF. Thus, the rank of Af is always the number of active nodes less the number of connected subgraphs, which can be written as

  • rank(A f)=rank(A RREF)=p′−g=r  (2.8)
  • It can be observed that the total number of loops in the undirected graph Gg=(N,B) associated with N may be computed as the number of active branches less the rank of Af, which may be written as

  • s=q′−rank(A f)=q′−r  (2.9)
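  • By way of a simple illustration, a single connected circuit (g=1) with p′=4 active nodes and q′=6 active branches gives r=4−1=3 by (2.8) and, therefore, s=6−3=3 independent loops by (2.9).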
  • Using the branch models in FIG. 4, the parameters of the circuit can be conveniently arranged into parameter matrices denoted as Rbr, Gbr, Lbr, and Cbr. In one embodiment, these square matrices would have dimensions equal to the total number of branches in the global network to be modeled, and would contain the parameters of each branch in the corresponding diagonal entry of each matrix. If some branches are coupled through mutual inductances or mutual capacitances, then those mutual parameters are represented in the off-diagonal entries of Lbr or Cbr corresponding to these branches. In the same way, mutual resistances or mutual conductances can be modeled between branches. Even though such quantities might not have physical meaning similar to mutual inductances or mutual capacitances, they can be employed for simulation purposes. If some branch parameters are not present in the circuit, the corresponding entries in the parameter matrices are filled with zeros. Similar to parameter matrices, the external independent voltage and current sources are also represented as vectors denoted as ebr and jbr. These vectors would have voltages and currents in those entries corresponding to the branches with respective sources and zeros elsewhere.
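  • An illustrative sketch of arranging hypothetical branch parameters into such matrices and vectors is given below; the numerical values are arbitrary.

    import numpy as np

    # Hypothetical inductive branch data: (r_br, L_br, e_br) for three branches.
    branch_data = [(0.1, 1e-3, 10.0), (0.2, 2e-3, 0.0), (0.0, 5e-4, 0.0)]

    q = len(branch_data)
    Rbr, Lbr, ebr = np.zeros((q, q)), np.zeros((q, q)), np.zeros(q)
    for k, (r, L, e) in enumerate(branch_data):
        Rbr[k, k] = r               # self parameters occupy the diagonal entries
        Lbr[k, k] = L
        ebr[k] = e
    Lbr[0, 1] = Lbr[1, 0] = 2e-4    # a mutual inductance goes in the off-diagonal entries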
  • In some circuits to be simulated, the network parameters will not depend on the currents and voltages applied to the branches. In other circuits the parameters will vary with time, so the parameter matrices would also become matrix functions of time Rbr(t), Gbr(t), Lbr(t), and Cbr(t). For simplicity of notation herein, such time dependence will not be written explicitly. Also, since inductors and capacitors are energy storage components, in addition to their time dependence, their total derivatives with respect to time must also be known. Thus, the derivatives of time-varying inductances and capacitances are additional inputs into the model of the energy transformation and exchange processes in the circuit.
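  • A correspondingly simple storage layout for the parameter matrices and source vectors might look as follows for a three-branch example; the values and branch assignments are purely illustrative.

import numpy as np

q = 3                                  # three branches
R_br = np.diag([1.0, 0.5, 0.0])        # branch resistances; zero where no resistor is present
L_br = np.diag([0.0, 1e-3, 2e-3])      # branch self-inductances
L_br[1, 2] = L_br[2, 1] = 5e-4         # off-diagonal entries hold mutual inductances
G_br = np.zeros((q, q))                # no conductive branches in this example
C_br = np.zeros((q, q))                # no capacitive branches in this example
e_br = np.array([10.0, 0.0, 0.0])      # branch voltage sources
j_br = np.zeros(q)                     # branch current sources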
  • The network parameter matrices and the vectors of voltage and current sources can be grouped to form a parameter set for the global network, which can be defined as
  • P = {Rbr, Lbr, dLbr/dt, ebr, Gbr, Cbr, dCbr/dt, jbr}  (2.10)
  • It is also convenient to define the following subsets of P
  • PL = {Rbr, Lbr, dLbr/dt, ebr}  (2.11)
  • PC = {Gbr, Cbr, dCbr/dt, jbr}  (2.12)
  • PA = {Rbr, Gbr, ebr, jbr}  (2.13)
  • Clearly,
  • P = PL ∪ PC ∪ PA  (2.14)
  • Subdivision of the parameter set P corresponding to the global network N into the subsets (2.11), (2.12), and (2.13) has the following goal. First, consider a network of branches of either of the two types such that all parameters can be placed in the single set PA. A network of this kind has no energy storage elements such as inductors or capacitors. Modeling such a network requires solving a system of AEs relating branch voltages and currents to external sources through the network parameters and topology. From the nature of the equations that must be solved in order to obtain all branch currents and voltages, such networks are referred to as algebraic and denoted NA. On the other hand, networks whose parameters are placed in the set PL have inductive elements which may store energy in the magnetic field. Therefore, a network of this type possesses a system of ODEs, or more precisely a state equation, whose natural state variables may be inductor currents or flux linkages. Similarly, networks whose parameters are placed in PC have capacitors that may store energy in an electric field and, therefore, possess ODEs whose natural state variables may be capacitor voltages or charges. These two networks are referred to as inductive and capacitive, respectively.
  • Thus, three types of elementary electrical networks, namely inductive, capacitive, and algebraic, denoted as NL, NC, and NA, respectively, are considered. Of these, only the first two networks are allowed to have energy storage circuit components, and therefore, only these two networks can have state variables. In the most general case, an electrical system may be composed of all types of branches considered above. Therefore, a corresponding network N can be represented as an interconnection of NL, NC, and NA, which is symbolically expressed in terms of union as

  • N=NL∪NC∪NA  (2.15)
  • Depending on a particular circuit to be modeled, some networks may or may not be present in (2.15).
  • Topological Canonical Form
  • Models of electrical networks must obey all laws of circuit theory, and in particular, Kirchhoff's current and voltage laws, KCL and KVL. Automated modeling requires formulation of equations describing the network using KCL and KVL that are written algorithmically for the entire circuit or its sections. Moreover, KCL and KVL may be written for a network of the most general kind based only on its topological configuration, regardless of the actual branch elements or their volt-ampere characteristics. Therefore, the starting point in the analysis of graph Gd g=(N,B) is the associated full node incidence matrix Af.
  • It is necessary to express KCL and KVL for the network whose topology is given by Gd g=(N,B) and stored in Af. For simplicity of notation, all branches in this section are assumed to be active. The procedure continues as follows. First, instead of Gd g=(N,B), the corresponding undirected version Gg=(N,B) is considered. Then, suppose that from Gg=(N,B), it is possible to find a forest of spanning trees Gtrees g=(N,By) that spans the entire node set N using the set By, which is a subset of B of minimal size over all branch subsets. In general, Gtrees g=(N,By) is far from unique for a given Gg=(N,B). In fact, it can be shown that the total number of spanning trees equals det(AaAa^T), where Aa is the reduced node incidence matrix obtained from Af by deleting one row per connected subgraph (introduced below). Moreover, the partitioning of a multi-graph into spanning trees Gtrees g=(N,By) and remaining link-branches is a nondeterministic topological problem. One way to simplify this task is to associate appropriate weights with the branches and convert the problem into finding a spanning tree with a minimized/maximized sum of such weights. This approach is known in network optimization as the minimum/maximum spanning tree problem. There are several well-known minimum/maximum spanning tree algorithms with complexity on the order of O(q log p) or better, depending on the data structure used to represent the graph. Also, the branch weights should be relatively simple, both to promote good performance and to make it possible to prove certain useful properties. Thus, the weights can be assigned to network branches based on their respective parameters (2.10). This method of obtaining a spanning tree and a set of links with some desired property will be utilized in the present exemplary embodiment for the purpose of automated modeling.
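  • A compact Kruskal-type sketch of such a weighted spanning-forest search is given below. The helper name spanning_forest, its interface, and the union-find implementation are illustrative assumptions made here, not the patent's implementation.

def spanning_forest(num_nodes, branch_nodes, weight, maximize=False):
    """Partition branches into tree-branches (By) and link-branches (Bx).
    branch_nodes: list of (node_a, node_b) per branch; weight: branch index -> number."""
    parent = list(range(num_nodes))

    def find(n):                      # union-find with path compression
        while parent[n] != n:
            parent[n] = parent[parent[n]]
            n = parent[n]
        return n

    order = sorted(range(len(branch_nodes)), key=weight, reverse=maximize)
    tree, links = [], []
    for j in order:
        a, b = (find(n) for n in branch_nodes[j])
        if a != b:
            parent[a] = b             # branch joins the spanning forest (By)
            tree.append(j)
        else:
            links.append(j)           # branch would close a cycle (Bx)
    return sorted(tree), sorted(links)

  • With maximize=False the call behaves as a minimum spanning tree search and with maximize=True as a maximum spanning tree search; a disconnected multi-graph yields a forest of spanning trees automatically.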
  • For the purpose of this section, any proper forest of spanning trees would suffice. Having performed such graph partitioning, the set of tree-branches By is identified. Thereafter, the set of remaining link-branches Bx can be determined as

  • B x =B−B y  (2.16)
  • Since the subsets By and Bx are identified based on Gtrees g=(N,By), it is now also possible to re-sort the complete branch set B such that all tree-branches appear first from the left and all the remaining link-branches on the right. That is, the branch set B can be re-ordered as

  • B={By,Bx}  (2.17)
  • The new branch order defined in (2.17) can be applied to the columns of Af. In this case, the new branch order can be related to the original branch order through a permutation matrix which is denoted as Tp. Multiplying Af from the right by Tp results in a matrix whose columns are ordered to correspond to the branch set B in (2.17). That is, the permutation matrix Tp should sort the columns of Af such that

  • AfTp=[Atrees,Alinks]=Âf  (2.18)
  • This permutation matrix can be assembled from an identity matrix by sorting its columns at the same time as the branches in (2.17). Note that the multiplication from the right by Tp T performs the reverse column permutation and restores the original column order. That is

  • Af = Âf Tp^T  (2.19)
  • Here, as well as further on, the hat sign above a matrix denotes that the corresponding matrix or vector quantity is referred to a branch order different from the original order.
  • After the full node incidence matrix is expressed in the form (2.18), its algebraic properties can be utilized. In particular, since the columns of Atrees correspond only to branches that form a free-of-cycles forest, this matrix must have full rank. On the other hand, Alinks contains only columns corresponding to the link-branches, and therefore, none of its columns add to the rank of Âf. Therefore, the rank of Âf, as well as the rank of Af, is determined by the columns of Atrees, and these columns can be chosen to be the basis columns of Âf or Af. With respect to (2.18), the basis columns appear first from left-to-right. The RREF of (2.18), therefore, will have the identity matrix of the size of its rank as the upper-left block, and its re-ordered structure becomes
  • RREF(Âf) = [ Ir×r   Âr×s
                 0g×r   0g×s ]  (2.20)
  • The zero rows on the bottom of (2.20) do not include any useful topological or algebraic information about the network and therefore can be deleted. We define the topological canonical form (TCF) of the node incidence matrix to be the result of the reduction of (2.20):

  • ATCF = TCF(Af) = [ Ir×r  Âr×s ]  (2.21)
  • The TCF is one of the key concepts used herein to describe networks. From ATCF, the reduced incidence matrix Âa and the so called basic loop matrix {circumflex over (B)}b are obtained as

  • Âa=ATCF  (2.22)

  • B̂b = [ −(Âr×s)^T  Is×s ]  (2.23)
  • Matrices (2.22) and (2.23) were assembled with respect to the branch order (2.17) and can be easily recovered for the original branch order as

  • Aa = Âa Tp^T  (2.24)

  • Bb = B̂b Tp^T  (2.25)
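  • Continuing the numerical sketch, the TCF and the matrices (2.22)-(2.23) can be computed as follows; rref and tcf are illustrative helper names, and a production implementation would exploit sparsity rather than dense arrays.

import numpy as np

def rref(M, tol=1e-9):
    """Reduced row echelon form by Gauss-Jordan elimination with partial pivoting."""
    M = M.astype(float).copy()
    row = 0
    for col in range(M.shape[1]):
        if row >= M.shape[0]:
            break
        piv = row + int(np.argmax(np.abs(M[row:, col])))
        if abs(M[piv, col]) < tol:
            continue
        M[[row, piv]] = M[[piv, row]]
        M[row] /= M[row, col]
        for i in range(M.shape[0]):
            if i != row:
                M[i] -= M[i, col] * M[row]
        row += 1
    return M

def tcf(A_f, tree, links):
    """Return (A_a, B_b) referred to the re-sorted branch order (2.17)."""
    A_hat = A_f[:, tree + links]                         # Af Tp, equation (2.18)
    R = rref(A_hat)
    A_tcf = R[np.any(np.abs(R) > 1e-9, axis=1)]          # drop zero rows, equation (2.21)
    r = A_tcf.shape[0]
    A_rxs = A_tcf[:, r:]                                 # the A(r x s) block
    A_a = A_tcf                                          # (2.22)
    B_b = np.hstack([-A_rxs.T, np.eye(A_rxs.shape[1])])  # (2.23)
    return A_a, B_b

  • Recovering the original branch order then amounts to the column permutations (2.24)-(2.25), i.e., right-multiplication by Tp^T.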
  • Denoting vectors of branch currents and voltages as ibr and vbr, respectively, KCL and KVL written for the whole network have the usual form

  • Aaibr=0  (2.26)

  • Bbvbr=0  (2.27)
  • Also, instead of using recovered matrices (2.24)-(2.25), it is possible to re-sort ibr and vbr into îbr and {circumflex over (v)}br for the new branch order (2.17) using the same permutation matrix Tp, and then use these with (2.22)-(2.23) to form KCL and KVL similar to (2.26)-(2.27). That is

  • Âa îbr = Aa ibr = 0  (2.28)

  • B̂b v̂br = Bb vbr = 0  (2.29)
  • Furthermore, based on the structure of TCF, an interesting property of matrices (2.22) and (2.23) can be utilized. This property relates voltages and currents of the branches corresponding to subsets By and Bx in (2.17). In particular, based on the branch order (2.17) it is possible to define the vectors of tree and link currents and voltages as

  • îbr=[iy,ix]  (2.30)

  • {circumflex over (v)}br=└vy, vx┘  (2.31)
  • Then, based on (2.22), (2.23), and (2.30), the vector of re-sorted branch currents can again be expressed in terms of the link currents as

  • î br=({circumflex over (B)} b)T i x  (2.32)
  • Similarly, the vector of re-sorted branch voltages can be expressed in terms of the tree voltages as

  • v̂br = (Âa)^T vy  (2.33)
  • The relations (2.28)-(2.33) play an important role in formulating the governing DAEs describing electrical networks.
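  • These relations are easy to check numerically: branch currents built from link currents via (2.32) satisfy KCL, and branch voltages built from tree voltages via (2.33) satisfy KVL, because Âa B̂b^T = 0 for matrices of the TCF form. A self-contained sketch with an arbitrarily chosen Â block (illustrative only) might read:

import numpy as np

A_hat = np.array([[1.0, 0.0, -1.0],      # an arbitrary r x s block of the TCF
                  [0.0, 1.0,  1.0]])
A_a = np.hstack([np.eye(2), A_hat])       # (2.22)
B_b = np.hstack([-A_hat.T, np.eye(3)])    # (2.23)
rng = np.random.default_rng(0)
i_x = rng.standard_normal(3)              # arbitrary link currents
v_y = rng.standard_normal(2)              # arbitrary tree voltages
i_br = B_b.T @ i_x                        # (2.32)
v_br = A_a.T @ v_y                        # (2.33)
assert np.allclose(A_a @ i_br, 0.0)       # KCL (2.28)
assert np.allclose(B_b @ v_br, 0.0)       # KVL (2.29)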
    Networks with State Variables
  • In this section, a class of finite electrical networks Nq α+β is considered. In particular, the two types of networks that can be modeled using a state variable approach are inductive and capacitive. Equipped with techniques based on a topological search for an appropriate forest of spanning trees, the conditions under which a corresponding state equation can be assembled are also considered.
  • Inductive Network State Equation Formulation
  • An inductive network can be built using branches of the type depicted in FIG. 4( a). Assuming that all switches are active, a network of this type is given as (2.80)

  • N L=(G,P L)  (2.80)
  • where the parameter set PL is defined in (2.11).
  • For a given topology of NL, the branches must be re-sorted into subsets as in (2.17). The subset By would collect branches that form spanning trees for all subgraphs. Each such tree is free of cycles, and covers all active nodes in its subgraph. The second category, denoted as subset Bx, takes the link-branches. These branches are the links in a sense that addition of any of them to the spanning tree would result in a cycle. These branches can carry state variables—independent currents—and since their number is minimal for each spanning tree, they form a minimal set of states for the NL. Thus, in order to partition Gg=(N,B) and obtain the required branch sets, the following weight function wL(b) is defined as
  • For j = 1…q, wL(bj) = { 0, if Lbr(j, j) = 0;  1, if Lbr(j, j) ≠ 0 }  (2.81)
  • Applying the minimum spanning tree algorithm (MinSTA) to Gg=(N,B) with weight function wL(b) a minimum spanning forest Gtrees g=(N,By) is obtained as before. Thereafter, the set of link branches Bx can be determined as in (2.16).
  • An important observation is that the network NL has no non-inductive loops if and only if there is a set Bx in which the number of branches equals the sum of their weights
  • Σ(bj∈Bx) wL(bj) = size(Bx) = s  (2.82)
  • Condition (2.82) is necessary and sufficient. The set Bx need not be unique, but any such set Bx satisfying (2.82) is equivalent in a topological sense. That is, an arbitrary set of branches (possibly larger than Bx) may be chosen to represent state variables, such as independent currents, in NL if and only if it contains a set Bx satisfying topological condition (2.82). If, however, (2.82) cannot be met, it follows that the given network is more than just a single inductive network. An algorithm for handling more than one network will be presented in later sections.
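  • In code, the weight function (2.81) and the test (2.82) reduce to a few lines on top of the spanning_forest helper sketched earlier (again purely illustrative, with L_br, num_nodes, and branch_nodes taken from the previous sketches):

def w_L(j, L_br):
    return 0 if L_br[j, j] == 0 else 1                    # equation (2.81)

B_y, B_x = spanning_forest(num_nodes, branch_nodes, weight=lambda j: w_L(j, L_br))
is_single_NL = all(w_L(j, L_br) == 1 for j in B_x)        # condition (2.82)
# If is_single_NL is False, the circuit cannot be modeled as NL alone and the
# multiple-network treatment of the later sections applies.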
  • Applying the MinSTA with (2.81), assuming (2.82) holds, the order in which all branches are grouped and sorted according to (2.17) is obtained. This final branch order is related to the original branch order through a permutation matrix, which in this case is denoted as TL. Using this permutation matrix and the TCF, matrices Aa and Bb are found as usual. Thereafter, a vector of state variables for NL may be chosen to be a vector of independent currents, such that

  • ibr=Bb Tix  (2.83)
  • The corresponding state equation is obtained using the dimensionality reduction procedure discussed in Wasynczuk and Sudhoff. The procedure may be as follows. The voltage equation written for the network is multiplied from the left by the corresponding matrix of KVL and all vectors of branch currents are replaced with (2.83). The result is
  • dix/dt = −Lx^−1 (Rx + dLx/dt) ix − Lx^−1 Bb ebr  (2.84)
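  • As a sketch of evaluating (2.84), and assuming constant inductances (so the dLx/dt term vanishes) with the reduced quantities formed analogously to (2.90)-(2.91), i.e. Rx = Bb Rbr Bb^T and Lx = Bb Lbr Bb^T, the right-hand side could be computed as follows (names are illustrative):

import numpy as np

def dix_dt(i_x, B_b, R_br, L_br, e_br):
    R_x = B_b @ R_br @ B_b.T
    L_x = B_b @ L_br @ B_b.T
    rhs = -(R_x @ i_x) - B_b @ e_br
    return np.linalg.solve(L_x, rhs)   # solve Lx * (dix/dt) = rhs rather than forming Lx^-1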
  • Capacitive Network State Equation Formulation
  • A capacitive network with augmented topology can be defined using the corresponding parameter set and an associated graph as

  • N C=(G,P C)  (2.85)
  • where the set PC is defined in (2.12).
  • Also, in the case of NC, the forest By of the associated Gtrees g=(N,By) can represent the state variables, namely the capacitor voltages. Thus, a slightly different topological approach might be used. In particular, a MaxSTA with a different weight function can be applied in order to find maximum spanning trees. This time, the weight function wC(bj) is defined such that
  • For j = 1…q, wC(bj) = { 0, if Cbr(j, j) = 0;  1, if Cbr(j, j) ≠ 0 }  (2.86)
  • In this case, the network NC should not have non-capacitive tree-branches. Such a condition is satisfied if and only if there is a branch set By for which the following is true
  • Σ(bi∈By) wC(bi) = size(By) = r  (2.87)
  • Condition (2.87) is also necessary and sufficient, and therefore, the discussion thereof applies to this case as well. Again, the set By need not be unique, but all such sets for which (2.87) holds are equivalent in a topological sense. If the condition (2.87) cannot be met, it can be shown that the corresponding network is not just NC but a union of the form NC∪NA. However, for the purpose of this section, a single capacitive network is considered.
  • Thus, applying the MaxSTA with weight function wC(b) and condition (2.87), a similar permutation matrix TC can be formed such that multiplying Af from the right by TC yields the desired branch order (2.17) for NC. Further, matrices Aa and Bb are formed exactly as before.
  • The natural state variables for NC are the capacitor voltages. Therefore, a vector of state variables vy can be chosen to be a vector of independent capacitor voltages such that

  • vbr=Aa Tvy  (2.88)
  • The state equation is very similar to (2.84) and is also obtained using the dimensionality reduction procedure applied to the current equation written in matrix form for NC. The result is
  • dvy/dt = −Cy^−1 (Gy + dCy/dt) vy + Cy^−1 Aa jbr  (2.89)
  • where notations are analogous to those used in Wasynczuk and Sudhoff, supra. Specifically
  • Gy = Aa Gbr Aa^T  (2.90)
  • Cy = Aa Cbr Aa^T  (2.91)
  • dCy/dt = Aa (dCbr/dt) Aa^T  (2.92)
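  • The capacitive counterpart of the earlier sketch, assuming constant capacitances so that dCy/dt = 0, evaluates (2.89) with the reduced quantities (2.90)-(2.91); names are illustrative only.

import numpy as np

def dvy_dt(v_y, A_a, G_br, C_br, j_br):
    G_y = A_a @ G_br @ A_a.T           # (2.90)
    C_y = A_a @ C_br @ A_a.T           # (2.91)
    rhs = -(G_y @ v_y) + A_a @ j_br
    return np.linalg.solve(C_y, rhs)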
  • Transformation of Variables
  • An important observation regarding the selection of state variables for NL as well as for NC is that, once the sets of states in (2.84) and (2.89) are obtained based on the topological conditions specified previously, none of the actual states are required to be associated with any particular branch. That is, a topologically proper set of state variables in NL and NC can be transformed into an equivalent set by any non-singular coordinate transformation. Such a transformation of states is nothing more than a change of variables in the state space. To show this, it is useful to introduce a time-varying coordinate transformation defined by two non-singular square matrices KL and KC of dimensions corresponding to the number of states in NL and NC, respectively. Multiplying (2.26) and (2.27) from the left by KC T and KL T respectively, KCL and KVL still hold:

  • KC TAaibr=0  (2.99)

  • KL TBbvbr=0  (2.100)
  • Thereafter, the new state variables for NL and NC are related to ix and vy as

  • ix=KLxL  (2.101)

  • vy=KCxC  (2.102)
  • And the corresponding branch currents in NL and the branch voltages in NC are expressed as

  • ibr=Bb TKLxL={tilde over (B)}b TxL  (2.103)

  • vbr = Aa^T KC xC = Ãa^T xC  (2.104)
  • Since the change of variables for the two networks is very similar, only the corresponding state equations for NL will be written explicitly. Proceeding as follows
  • dxL/dt = −KL^−1 Lx^−1 (Rx + dLx/dt) KL xL − KL^−1 (dKL/dt) xL − KL^−1 Lx^−1 Bb ebr
           = −( KL^−1 Lx^−1 Rx KL + KL^−1 Lx^−1 (dLx/dt) KL + KL^−1 (dKL/dt) ) xL − KL^−1 Lx^−1 Bb ebr  (2.105)
  • Choosing KL to be a constant matrix, the result of transformation is
  • dxL/dt = −KL^−1 Lx^−1 (Rx + dLx/dt) KL xL − KL^−1 Lx^−1 Bb ebr = −L̃x^−1 (R̃x + dL̃x/dt) xL − L̃x^−1 B̃b ebr  (2.106)
  • where the reduced quantities with the subscript "x" and a tilde above are defined using B̃b from (2.103) instead of just Bb. Since (2.84) and (2.106) are related through the similarity transformation via the constant matrix KL, the eigenvalues of the resulting dynamic matrices in both state equations are the same. Also, since it is possible to choose KL to be a permutation matrix, it follows that the branch ordering does not change the eigenvalues of the system.
  • It is also interesting to observe what happens to (2.105) if the time-varying coordinate transformation is chosen to be

  • KL=Lx −1  (2.107)
  • In this case, the new state variables become fluxes

  • λx=KL −1ix=Lxix  (2.108)
  • Then, with the following equality
  • (dKL^−1/dt) KL + KL^−1 (dKL/dt) = 0  (2.109)
  • equation (2.105) reduces to state equation
  • dλx/dt = −Rx Lx^−1 λx − Bb ebr  (2.110)
  • A similar change of variables can be performed on NC. The equations can be simplified even further by considering only constant capacitances, wherein a time-invariant coordinate transformation KC can be used. Therefore, the NC counterpart of (2.105) may be written as
  • dxC/dt = −KC^−1 Cy^−1 Gy KC xC + KC^−1 Cy^−1 Aa jbr = −C̃y^−1 G̃y xC + C̃y^−1 Ãa jbr  (2.111)
  • where, as before, the reduced quantities with the subscript "y" and a tilde above are defined using Ãa from (2.104) instead of Aa.
  • Even though KC in (2.100) can be any non-singular matrix, it is always possible to choose it in a way that the new state variables have particular physical significance. Similar to NL where the states can be transformed from currents ix to fluxes λx, in the case of NC the states may be transformed from voltages vy to charges qy by an appropriate choice of transformation. In particular, by defining the transformation as

  • KC=Cy −1  (2.112)

  • vy=Cy −1qy  (2.113)
  • The state equation (2.111) with capacitor charges as states becomes
  • dqy/dt = −Gy Cy^−1 qy + Aa jbr  (2.114)
  • Multiple Networks
  • An algorithm for automatically deriving a system (or systems) of DAEs for time-domain simulation of many practically useful electrical networks will now be discussed. As noted previously, there are some conditions under which a global network with state variables, here denoted as

  • N=(G,P ζ), where ζ=L, C  (3.1)
  • cannot be structurally represented entirely only as NL or as NC. Specifically, an attempt to represent the network N structurally entirely as NL fails if there is a shortage of inductive link branches in the set Bx. On the other hand, the network N cannot be viewed only as NC if the corresponding forest of spanning trees lacks capacitive branches in the set By. In these cases, the global network cannot be viewed entirely as a single type of network, and therefore, a more general approach is required.
  • A method of handling networks with arbitrary topologies is presented here based on the separation of N into interconnected networks. That is, if it is not possible to represent an entire circuit as a network of a single kind, it is necessary to identify some cuts of N that can be grouped together to form several networks based on their topological properties. Separation of N into NL, NC, and NA in a way such that it is possible to obtain consistent state equations for NL, NC, and a system of algebraic equations for NA is, therefore, a generalization of the ASMG approach for circuits with arbitrary topologies. In this sense, a robust state selection algorithm is a key to such a generalization.
  • The selection of branches that can carry state variables in NL or NC is determined through a topological partitioning of the global graph Gg=(N,B) into a forest of trees and remaining link branches as

  • G g=(N,B)={G trees ,G links}  (3.2)

  • where

  • G trees=(N,B y)  (3.3)

  • G links=(N,B x)  (3.4)
  • In previous discussions devoted to networks with state variables, the actual partitioning of Gg=(N,B) was performed with two distinct objectives. First, with the help of the MinSTA and the weight function wL(b), it was possible to reorder branches such that the set of link-branches Bx could be chosen to represent state variables in NL. Second, using a similar topological approach based on MaxSTA and the weight function wC(b), it was shown that branches from the set of tree-branches By can represent state variables in NC. Thereafter, the conditions for the state equations to be complete were: for NL the corresponding set Bx must contain only inductive branches; and for NC the corresponding set By must contain only capacitive branches. These two branch sets could be obtained independently by running the MinST and MaxST algorithms with different weight functions. These topological conditions are stated in (2.82) and (2.87).
  • The technique of network identification and partitioning introduced here is based on the TCF of the node incidence matrix assembled for the global network. As will be shown, the TCF makes topological information about N available in an "algebraic" sense for further use in KCLs and KVLs. There are also some details about notation that are worth pointing out. The topological quantities referred to a particular branch order, such as in (2.17), will be distinguished by the hat sign, keeping in mind that it is always possible to transform them back to the original order using a corresponding permutation matrix. The superscripts "L", "C", and "A", and combinations thereof, will be employed to relate variables and quantities to the inductive, capacitive, algebraic, and the overlapping networks, respectively.
  • Starting with NL, it will be shown how an algebraic network can be identified and how the corresponding system of algebraic equations can be assembled. It will also be shown that even in the presence of NA, a minimal and consistent state equation for NL can still be obtained following the same dimensionality reduction procedure set forth in Wasynczuk and Sudhoff. Then, similar derivations will be repeated for NC, in a somewhat simplified form, using its structural duality with respect to the inductive case. Finally, a way of obtaining a consistent system of DAEs relating all networks will be presented using the established framework.
  • Inductive and Algebraic Network Interconnection
  • Here, it is assumed that a network N=(G, PL) is constructed using branches shown in FIGS. 4A and 4C. This time, it is assumed that the MinSTA with weight function wL(b) is applied, and that in the end (2.82) does not hold. This implies that there is no way the set of links Bx can be chosen to contain only inductive branches, and
  • Σ(bj∈Bx) wL(bj) = h < size(Bx)  (3.5)
  • It follows from (2.17) and (3.5) that there is at least one non-inductive branch in Bx whose addition to Gtrees=(N,By) would result in a cycle. This cycle would be entirely composed of non-inductive branches. Otherwise, the spanning tree corresponding to this link branch would not be minimal which, in turn, would contradict the result of the MinSTA. In general, addition of any branch from Glinks=(N,Bx) to Gtrees=(N,By) results in a cycle. Therefore, the global graph Gg=(N,B) has as many non-inductive cycles as there are non-inductive branches in the branch set Bx. This property is based on the MinST and the weight function (2.81). Thus, based on (3.2), the total number of non-inductive cycles in Gg=(N,B) can be determined as

  • m=size(B x)−h=s−h  (3.6)
  • These cycles do not have state variables such as currents or fluxes. Therefore, none of the branches participating in such cycles can be placed in NL.
  • There are many methods that can be used to identify cycle(s) of a particular kind in a graph. Recall that the branches can be reordered similar to (2.17) and (2.18) by a corresponding permutation matrix obtained from the MinSTA. This time, it is necessary to reorder branches in (2.17) and columns in (2.18) in an even more sophisticated way. Specifically, since it is known which branches in Bx are non-inductive, they can be identified and put first on the left side in Alinks. Then, the branches in By that correspond to cycles linked by non-inductive branches in Bx are identified, and the block Atrees is resorted such that these columns appear on the right. Similar to (2.18) this final branch ordering can be expressed in terms of the node incidence matrix and permutation matrix as

  • AfTL=[Atrees L,Atrees A,Alinks A,Alinks L]  (3.7)
  • According to the system of notation employed, the superscript “L” denotes that the corresponding branches can be safely placed into inductive network NL, and the “A” identifies all other non-inductive branches, as viewed from NL, that belong to NA. Taking the RREF of (3.7) and removing zero rows on the bottom yields a TCF similar to the one introduced in (2.21).
  • ATCF = [ Iη×η   0      0         Âη×h^L
             0      Iμ×μ   Âμ×m^A    Ĉμ×h^A ]  (3.8)
  • where
  • h—is the number of inductive link-branches in the set Bx, as defined in (3.5);
  • m—is the number of non-inductive link-branches in Bx, as defined in (3.6);
  • η—is the number of tree-branches in the set By that are linked by the h inductive link-branches in Bx.
  • μ—is the number of tree-branches in By that are linked by the m non-inductive link-branches from Bx.
  • Also, η+μ=r, and m+h=s, where r and s are defined in (2.8) and (2.9), respectively. The corresponding subsets of tree-branches in Gtrees=(N,By) can be readily identified from the node incidence matrix Af in its TCF such as (2.21).
  • The reduced node incidence matrix and the basic loop matrix for the global network N are found as usual
  • Âa = ATCF  (3.9)
  • B̂b = [ 0m×η           −(Âμ×m^A)^T   Im×m   0m×h
           −(Âη×h^L)^T    −(Ĉμ×h^A)^T   0h×m   Ih×h ]  (3.10)
  • Even though the TCFs (2.21) and (3.8) are different only in terms of the relative order of some columns (branches), expression (3.8) has more advanced structural properties that can be utilized in writing KCL and KVL for the sections of the global network. Specifically, KCL for NL can be written based on (3.9) as
  • [ Iη×η  0η×(μ+m)  Âη×h^L ] îbr = [ Iη×η  Âη×h^L ] îbr^L = Âa^L îbr^L = 0  (3.11)
  • Based on (3.10), KVL for NA can be derived as
  • [ 0m×η  −(Âμ×m^A)^T  Im×m  0m×h ] v̂br = [ −(Âμ×m^A)^T  Im×m ] v̂br^A = B̂b^A v̂br^A = 0  (3.12)
  • It is interesting to observe that KCL (3.11) is a self-contained equation for NL. Similarly, KVL (3.12) contains only quantities relevant to NA. In this sense, these two equations are de-coupled.
  • The vector of state variables for NL, which is the vector of independent currents, is then chosen as
  • îbr^L = [ −Âη×h^L ; Ih×h ] ix = (B̂b^L)^T ix  (3.13)
  • In proceeding further, two more equations describing both networks are needed, specifically KVL for NL, and KCL for NA. First, from (3.10), the following KVL can be written
  • [ −(Âη×h^L)^T  −(Ĉμ×h^A)^T  0h×m  Ih×h ] v̂br = 0  (3.14)
  • which may be rewritten using notation for the two networks as
  • [ −(Âη×h^L)^T  Ih×h ] v̂br^L = [ (Ĉμ×h^A)^T  0h×m ] v̂br^A
  • B̂b^L v̂br^L = Ĉb^LA v̂br^A  (3.15)
  • Similarly, from (3.9), KCL can be written as
  • [ 0μ×η  Iμ×μ  Âμ×m^A  Ĉμ×h^A ] îbr = 0  (3.16)
  • and
  • [ Iμ×μ  Âμ×m^A ] îbr^A = −[ 0μ×η  Ĉμ×h^A ] îbr^L
  • Âa^A îbr^A = Ĉa^LA îbr^L = −Ĉ^A ix  (3.17)
  • It is worth noting that (3.15) and (3.17) express the connection between the two networks. To be more specific, KVL (3.15) for NL now has an additional voltage source term that comes from the interconnection of NL and NA. Similarly, KCL (3.17) written for NA has a current source term which comes from NL.
  • The dimensionality reduction procedure for obtaining state equations for NL remains the same. That is, the voltage equation written in matrix form for the N is multiplied from the left by the corresponding KVL loop matrix, and the vector of branch currents is substituted with an expression of the form (2.83). Thus, using Bb L from the KVL (3.15) and the vector of state currents chosen as in (3.13), the state equation for NL becomes
  • dix/dt = −Lx^−1 (Rx + dLx/dt) ix − Lx^−1 Bb^L ebr^L − Lx^−1 Cb^LA vbr^A  (3.18)
  • where the reduced quantities with the subscript “x” can be obtained as
  • Rx = Bb^L Rbr^L (Bb^L)^T = [ −(Âη×h^L)^T  0h×(μ+m)  Ih×h ] R̂br [ −(Âη×h^L)^T  0h×(μ+m)  Ih×h ]^T  (3.19)
  • Lx = Bb^L Lbr^L (Bb^L)^T = [ −(Âη×h^L)^T  0h×(μ+m)  Ih×h ] L̂br [ −(Âη×h^L)^T  0h×(μ+m)  Ih×h ]^T  (3.20)
  • R̂br = TL^T Rbr TL  (3.21)
  • L̂br = TL^T Lbr TL  (3.22)
  • It can be verified that Lx is non-singular, and that (3.18) is in fact a minimal state equation for NL with the total of h state variables. Also, note the additional source term on the right side of (3.18), which distinguishes (3.18) from (2.84).
  • It appears that in order to solve (3.18), the branch voltages for NA should be known. These voltages are functions of internal topology, parameters, and external sources. In general, it is necessary to compute both currents and voltages for all branches in NA. Here, KVL (3.12) and KCL (3.17) are utilized. In particular, suppose the currents of NA are computed first, and then the branch voltages. However, in (3.17) there are fewer equations than branches in NA. Specifically, (3.17) provides μ equations with m+μ unknowns. That is, there should be m more equations, precisely one for each cycle in NA. On the other hand, NA has its own voltage equation, which may be expressed in terms of the branch order given by the columns of (3.7) as

  • v̂br^A = R̂br^A îbr^A + êbr^A  (3.23)
  • Substituting (3.23) into (3.12), the following result can be obtained

  • B̂b^A R̂br^A îbr^A = −B̂b^A êbr^A  (3.24)
  • where êbr A is the vector of voltage sources corresponding to branches in NA and the branch order given by (3.7).
  • Combining (3.17) and (3.24), a complete system of m+μ equations for the branch currents of NA can be assembled in the following form
  • [ Âa^A ; B̂b^A R̂br^A ] îbr^A = −[ Ĉ^A  0 ; 0  B̂b^A ] [ ix ; êbr^A ]  (3.25)
  • If the branch resistances do not depend on currents, the system of equations (3.25) has the form Ax=b. In addition, if NA has time-invariant resistive branches, A is a constant nonsingular matrix. Therefore, for the time-invariant case, (3.25) can be solved by inverting the corresponding matrix A once for each new topology. Thereafter, the corresponding vector of branch voltages {circumflex over (v)}br A is found using (3.23). Finally, (3.18), (3.23), and (3.25) form a system of DAEs describing the two networks NL and NA.
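  • A sketch of this solution sequence, assuming time-invariant resistances and using the block structure of (3.25) together with the branch equation (3.23), is given below; all function and variable names are illustrative assumptions.

import numpy as np

def solve_algebraic(A_a_A, B_b_A, R_br_A, C_A, i_x, e_br_A):
    """Branch currents and voltages of NA for given state currents i_x."""
    lhs = np.vstack([A_a_A, B_b_A @ R_br_A])             # coefficient matrix of (3.25)
    rhs = np.concatenate([-C_A @ i_x, -B_b_A @ e_br_A])  # stacked right-hand side of (3.25)
    i_br_A = np.linalg.solve(lhs, rhs)
    v_br_A = R_br_A @ i_br_A + e_br_A                    # (3.23)
    return i_br_A, v_br_A

  • For a fixed topology the matrix on the left could be factored once and reused at every time step, since only the right-hand side changes with ix.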
  • Capacitive and Algebraic Network Interconnection
  • In this section, it is assumed that a network N=(G, PC) is constructed from branch models shown in FIG. 4. It is assumed that the MaxSTA with the weight function wC(b) is applied and that condition (2.87) does not hold. This result implies that the set of tree-branches By cannot be chosen to contain only capacitive branches, and
  • Σ(bi∈By) wC(bi) = τ < size(By)  (3.26)
  • Then, based on (2.17) and (3.26), the total number of non-capacitive tree-branches in the forest of maximum spanning trees can be determined as

  • ζ=size(B y)−τ=r−τ  (3.27)
  • When applying the MaxSTA, a permutation matrix that sorts columns of the full node incidence matrix Af similar to (2.18) is also assembled. Using (2.18) and the maximum spanning tree property, the branches in Alinks and Atrees are reordered in the following way.
  • First, all τ columns of Atrees corresponding to capacitive branches are placed on the left of Atrees. Then, from the set Bx, the branches that are links to the capacitive trees in Gtrees=(N,By) are identified and the corresponding columns put on the right of Alinks. The final branch ordering with the corresponding permutation matrix yields a result similar to (3.7). That is

  • AfTC = [Atrees C, Atrees A, Alinks A, Alinks C]  (3.28)
  • Another TCF can be obtained by taking the RREF of (3.28) and removing zero rows from the bottom. This TCF has the following structure
  • ATCF = [ Iτ×τ   0      D̂τ×t^CA   Âτ×z^C
             0      Iζ×ζ   Âζ×t^A    0       ]  (3.29)
  • where
  • t—is the number of non-capacitive link-branches in the set Bx.
  • z—is the number of capacitive link-branches in the set Bx.
  • τ—is the number of capacitive tree-branches in the set By as defined in (3.26).
  • ζ—is the number of non-capacitive tree-branches in the set By as defined in (3.27).
  • Also, ζ+τ=r and t+z=s. If any of the dimensions are equal to zero, the corresponding block in (3.29) would simply disappear.
  • Based on the TCF (3.29), the reduced node incidence matrix and the basic loop matrix, both referred to the branch ordering defined in (3.28), are found as
  • Âa = ATCF  (3.30)
  • B̂b = [ −(D̂τ×t^CA)^T   −(Âζ×t^A)^T   It×t   0t×z
           −(Âτ×z^C)^T    0z×ζ          0z×t   Iz×z ]  (3.31)
  • Similar to (3.11)-(3.17), KCLs and KVLs can be written for the two networks. That is, writing KCL using (3.30),
  • [ Iτ×τ  0  D̂τ×t^CA  Âτ×z^C ] îbr = 0  (3.32)
  • [ Iτ×τ  Âτ×z^C ] îbr^C = [ 0  −D̂τ×t^CA ] îbr^A
  • Âa^C îbr^C = D̂a^CA îbr^A  (3.33)
  • and
  • [ Iζ×ζ  Âζ×t^A ] îbr^A = Âa^A îbr^A = 0  (3.34)
  • Writing KVL based on (3.31),
  • [ −(D̂τ×t^CA)^T  −(Âζ×t^A)^T  It×t  0t×z ] v̂br = 0  (3.35)
  • [ −(Âζ×t^A)^T  It×t ] v̂br^A = [ (D̂τ×t^CA)^T  0t×z ] v̂br^C
  • B̂b^A v̂br^A = D̂b^CA v̂br^C  (3.36)
  • and
  • [ −(Âτ×z^C)^T  Iz×z ] v̂br^C = B̂b^C v̂br^C = 0  (3.37)
  • Selecting the vector of state variables for NC to be the vector of independent voltages vy, similar to (2.88), then
  • v̂br^C = [ Iτ×τ ; (Âτ×z^C)^T ] vy = (Âa^C)^T vy  (3.38)
  • Using (3.38), expression (3.36) can also be written as

  • B̂b^A v̂br^A = (D̂^A)^T vy  (3.39)
  • Assuming that the vector of branch currents for network NA can be expressed in terms of the branch order (3.28) as

  • îbr^A = Ĝbr^A v̂br^A − ĵbr^A  (3.40)
  • the system of linear equations for v̂br^A based on (3.34), (3.39), and (3.40) can be written as
  • [ B̂b^A ; Âa^A Ĝbr^A ] v̂br^A = [ (D̂^A)^T  0 ; 0  Âa^A ] [ vy ; ĵbr^A ]  (3.41)
  • If the branch conductances do not depend on voltages, the system of equations (3.41) has the form Ax=b, with ζ+t equations and the same number of unknowns, and A being a full-rank square matrix.
  • Using the dimensionality reduction procedure in Wasynczuk and Sudhoff, the state equation for the NC in the presence of the algebraic network becomes
  • dvy/dt = −Cy^−1 (Gy + dCy/dt) vy + Cy^−1 Aa jbr − Cy^−1 Da^CA ibr^A  (3.42)
  • where the reduced quantities are defined as
  • Cy = Aa^C Cbr (Aa^C)^T = [ Iτ×τ  0τ×(ζ+t)  Âτ×z^C ] Ĉbr [ Iτ×τ  0τ×(ζ+t)  Âτ×z^C ]^T  (3.43)
  • Gy = Aa^C Gbr (Aa^C)^T = [ Iτ×τ  0τ×(ζ+t)  Âτ×z^C ] Ĝbr [ Iτ×τ  0τ×(ζ+t)  Âτ×z^C ]^T  (3.44)
  • and

  • Ĉbr = TC^T Cbr TC  (3.45)

  • Ĝbr = TC^T Gbr TC  (3.46)
  • Inductive, Capacitive, and Algebraic Network Interconnection
  • Heretofore, both types of network, namely NL and NC, have been considered with a shortage of branches that can carry state variables. In both cases, the corresponding minimal state equations (3.18) and (3.42) were completed by adding extra source terms that came about due to the algebraic part of the corresponding network. Also, both types of the final systems of DAEs are structurally similar due to their circuit duality. These structural properties can be utilized even further in obtaining a system of DAEs for the global network N=(G, P) that includes NL, NC, and NA. In doing so, there are two approaches.
  • In the first approach, NL and NA are considered as was done in the beginning of this chapter, whereupon NC is incorporated into the existing structure. In doing so, capacitor voltages of NC can be mapped into ebr L and vbr A in (3.18). A second approach would consist of adding NL to the structure developed for the NC and NA, and mapping inductor currents into jbr C and ibr A in (3.42). Since both methods yield equivalent results, choosing either one is a matter of pure convenience. Following the order in which the material was presented earlier, preference is given to the first approach.
  • Suppose that the MinSTA is applied resulting in (3.5) from which the branch order (3.7) and the TCF (3.8) are obtained. In the most general case, NC could have its branches anywhere in Gtrees=(N,By) and among any non-inductive link-branches in Glinks=(N,Bx). Relative to (3.7), these branches could be columns corresponding to blocks Atrees L, Atrees A, and Alinks A. The capacitive branches corresponding to any of the columns in Atrees L represent the part of NC that overlaps with NL. As a result of such overlapping, each of such capacitive branches is going to have its own independent state variable within NC. In fact, the branch voltages corresponding to such capacitive branches can be viewed as independent voltage sources ebr L present in (3.18). The remaining capacitive branches are viewed as a part of an algebraic network for NL, and therefore, are going to be represented in the columns of blocks Atrees A and Alinks A as given in (3.7).
  • Now, the challenge is to reorder columns (branches) in (3.7) taking into consideration the capacitive network. The procedure of reordering columns is very similar to the two previous cases. In particular, all columns corresponding to the non-capacitive branches in Atrees L are identified and placed on the left side of this block. Then, the trees of capacitive branches that form a capacitive network need to be separated. In order to achieve this, the MaxSTA with weight function wC(b) is applied to the branches in Atrees A and Alinks A. Then, with the result (3.26), the columns of this block are again sorted similar to (3.28). Thereafter, the final branch ordering with corresponding permutation matrix may be expressed as

  • AfTLCA=[Atrees L,Atrees LC,Atrees C,Atrees A,Alinks A,Alinks CA,Alinks L]  (3.47)
  • The permutation matrix TLCA in (3.47) sorts branches of the global network N in groups with very specific topological properties corresponding to different networks. Again, taking the RREF of (3.47) and removing the zero rows, a TCF with the following structure is produced
  • ATCF = [ Iη×η^L   0         0         0        0          0         Âη×h^L
             0        Iμ×μ^LC   0         0        0          0         Âμ×h^LC
             0        0         Iτ×τ^CA   0        D̂τ×t^CA   Âτ×z^C    Ĉτ×h^LC
             0        0         0         Iζ×ζ^A   Âζ×t^A     0         Ĉζ×h^LA ]  (3.48)
  • where
  • η—is the number of non-capacitive tree-branches in the set By that are also placed in NL.
  • μ—is the number of capacitive tree-branches in By that are placed in NL and NC.
  • τ—is the number of capacitive tree-branches in By that are placed in NC.
  • ζ—is the number of non-capacitive tree-branches in By that are placed in NA.
  • t—is the number of non-capacitive link branches in Bx, that are placed in NA.
  • z—is the number of capacitive link-branches in Bx, that are placed in NC.
  • h—is the number of inductive link-branches in the set Bx that are placed in NL.
  • The reduced node incidence matrix and the basic loop matrix for the global network referred to the branch order (3.47) are then found as
  • Âa = ATCF  (3.49)
  • B̂b = [ 0              0               −(D̂τ×t^CA)^T   −(Âζ×t^A)^T    It×t   0      0
           0              0               −(Âτ×z^C)^T    0              0      Iz×z   0
           −(Âη×h^L)^T    −(Âμ×h^LC)^T    −(Ĉτ×h^LC)^T   −(Ĉζ×h^LA)^T   0      0      Ih×h ]  (3.50)
  • From this point, it is possible to proceed as usual. KCL for NL using (3.49) can be written as
  • [ Iη×η^L  0  0η×(τ+ζ+t+z)  Âη×h^L ; 0  Iμ×μ^LC  0μ×(τ+ζ+t+z)  Âμ×h^LC ] îbr = 0  (3.51)
  • [ Iη×η^L  0  Âη×h^L ; 0  Iμ×μ^LC  Âμ×h^LC ] îbr^L = Âa^L îbr^L = 0  (3.52)
  • For NL and NA,
  • [ 0ζ×(η+μ+τ)  Iζ×ζ^A  Âζ×t^A  0ζ×z  Ĉζ×h^LA ] îbr = 0  (3.53)
  • [ Iζ×ζ^A  Âζ×t^A ] îbr^A = [ 0ζ×(η+μ)  −Ĉζ×h^LA ] îbr^L
  • Âa^A îbr^A = −Ĉa^LA îbr^L  (3.54)
  • Finally, KCL relating NC to the two remaining networks is
  • [ 0μ×η  Iμ×μ^LC  0  0  0  0  Âμ×h^LC ; 0τ×η  0τ×μ  Iτ×τ^CA  0τ×ζ  D̂τ×t^CA  Âτ×z^C  Ĉτ×h^LC ] îbr = 0  (3.55)
  • [ Iμ×μ^LC  0  0 ; 0τ×μ  Iτ×τ^CA  Âτ×z^C ] îbr^C = [ 0μ×(η+μ)  −Âμ×h^LC ; 0τ×(η+μ)  −Ĉτ×h^LC ] îbr^L + [ 0μ×ζ  0μ×t ; 0τ×ζ  −D̂τ×t^CA ] îbr^A
  • Âa^C îbr^C = −Ĉa^LC îbr^L − D̂a^CA îbr^A  (3.56)
  • Based on (3.50), KVL can be written in the following way. First, for the capacitive network

  • [ 0z×(η+μ)  −(Âτ×z^C)^T  0z×(ζ+t)  Iz×z  0z×h ] v̂br = 0  (3.57)

  • [ −(Âτ×z^C)^T  Iz×z ] v̂br^C = B̂b^C v̂br^C = 0  (3.58)
  • Then, for NC and NA
  • [ 0t×(η+μ)  −(D̂τ×t^CA)^T  −(Âζ×t^A)^T  It×t  0t×(z+h) ] v̂br = 0  (3.59)
  • [ −(Âζ×t^A)^T  It×t ] v̂br^A = [ 0t×μ  (D̂τ×t^CA)^T  0t×z ] v̂br^C
  • B̂b^A v̂br^A = −D̂b^CA v̂br^C  (3.60)
  • Finally, KVL relating all three networks can be written as
  • [ −(Âη×h^L)^T  −(Âμ×h^LC)^T  −(Ĉτ×h^LC)^T  −(Ĉζ×h^LA)^T  0h×(t+z)  Ih×h ] v̂br = 0  (3.61)
  • [ −(Âη×h^L)^T  0h×μ  Ih×h ] v̂br^L = [ (Âμ×h^LC)^T  (Ĉτ×h^LC)^T  0h×z ] v̂br^C + [ (Ĉζ×h^LA)^T  0h×t ] v̂br^A
  • B̂b^L v̂br^L = −Ĉb^LC v̂br^C − Ĉb^LA v̂br^A  (3.62)
  • As before, the vector of state variables for NL is selected as a vector of independent currents such that
  • îbr^L = [ −Âη×h^L ; −Âμ×h^LC ; Ih×h ] ix = (B̂b^LC)^T ix  (3.63)
  • Note that in (3.63), it is possible to use {circumflex over (B)}b L from (3.62) in order to avoid computing currents of capacitive branches.
  • Similar to (3.38), the vector of states for NC is chosen to be a vector of independent capacitor voltages such that
  • v̂br^C = [ Iμ×μ^LC  0 ; 0  Iτ×τ^CA ; 0  (Âτ×z^C)^T ] vy = (Âa^C)^T vy  (3.64)
  • Based on definition (3.63), expression (3.54) becomes
  • Âa^A îbr^A = −Ĉa^LA îbr^L = −C^LA ix  (3.65)
  • Also, based on (3.63) and (3.56), it follows that
  • [ 0μ×(η+μ)  −Âμ×h^LC ; 0τ×(η+μ)  −Ĉτ×h^LC ] îbr^L = [ −Âμ×h^LC ; −Ĉτ×h^LC ] ix = −D^LC ix  (3.66)
  • which then can be used to rewrite the compact form of (3.56) as
  • Âa^C îbr^C = −Ĉa^LC îbr^L − D̂a^CA îbr^A = −D^LC ix − D̂a^CA îbr^A  (3.67)
  • Similarly, based on (3.64), KVL (3.60) and (3.62) can be rewritten as
  • B̂b^A v̂br^A = −D̂b^CA v̂br^C = −D^CA vy  (3.68)
  • [ (Âμ×h^LC)^T  (Ĉτ×h^LC)^T  0h×z ] v̂br^C = [ (Âμ×h^LC)^T  (Ĉτ×h^LC)^T ] vy = −C^LC vy  (3.69)
  • B̂b^L v̂br^L = −Ĉb^LC v̂br^C − Ĉb^LA v̂br^A = −C^LC vy − Ĉb^LA v̂br^A  (3.70)
  • Note that KCL (3.67) written for NC and KVL (3.70) written for NL are coupled through the corresponding interconnection matrices that are related to each other as

  • C LC=−(D LC)T  (3.71)
  • Using (3.70), (3.67), and the dimensionality reduction procedure of Wasynczuk and Sudhoff, the state equations for inductive and capacitive networks become
  • dix/dt = −Lx^−1 (Rx + dLx/dt) ix − Lx^−1 C^LC vy − Lx^−1 Cb^LA vbr^A − Lx^−1 Bb^L ebr^L  (3.72)
  • dvy/dt = −Cy^−1 (Gy + dCy/dt) vy − Cy^−1 D^LC ix − Cy^−1 Da^CA ibr^A + Cy^−1 Aa^C jbr^C  (3.73)
  • where all reduced quantities with the subscripts “x” and “y” can be computed very much like the ones in (3.19), (3.20), (3.43), and (3.44) using the corresponding KVL matrix Bb L and KCL matrix Aa C. As before, it can be verified that (3.72) and (3.73) are the minimal state equations for the NL and NC, respectively. The reduced quantities Lx and Cy are indeed non-singular matrices for any practical network.
  • As before, in order to solve (3.72) and (3.73) for the state derivatives, the branch currents and voltages of the algebraic network must be computed first. Here, it is reasonable to assume that NA contains non-empty resistive branches that are modeled as depicted in FIGS. 4( a) and (b), with voltage or current sources, respectively.
  • Since a single branch cannot contain both types of sources and assuming that each branch of NA has a nonzero resistor, the voltage equation for NA can be written as

  • {circumflex over (v)} br A ={circumflex over (R)} br A(î br A br A)+ê br A  (3.74)
  • Then, in order to obtain a system of equations for NA, KCL (3.65) and KVL (3.68) should be utilized. Thus, substituting (3.74) into (3.68) and combining the result with (3.65), the following system of equations is obtained
  • [ Âa^A ; B̂b^A R̂br^A ] îbr^A = [ −C^LA ix ; −D^CA vy − B̂b^A (R̂br^A ĵbr^A + êbr^A) ]  (3.75)
  • which has ζ+t equations and the same number of unknowns. If R̂br^A does not depend on branch currents, the system (3.75) has the form Ax=b with A being a full rank square matrix. Thereafter, (3.75) can be solved for the vector of currents îbr^A, which is then used in (3.74) in order to compute the voltages v̂br^A. Finally, (3.72)-(3.75) represent a consistent system of DAEs for the three elementary networks NL, NC, and NA.
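  • The way these four relations interlock can be illustrated with one explicit integration step. The following is a deliberately simple forward-Euler sketch with constant Lx and Cy; the dictionary keys and the helper name are assumptions made here for illustration, and a practical simulator would typically use a stiffer integration method.

import numpy as np

def dae_step(i_x, v_y, m, dt):
    """Advance (3.72)-(3.75) by one explicit step; m holds the reduced matrices and sources."""
    # 1. Solve the algebraic network (3.75), then recover its branch voltages from (3.74).
    lhs = np.vstack([m["A_a_A"], m["B_b_A"] @ m["R_br_A"]])
    rhs = np.concatenate([
        -m["C_LA"] @ i_x,
        -m["D_CA"] @ v_y - m["B_b_A"] @ (m["R_br_A"] @ m["j_br_A"] + m["e_br_A"]),
    ])
    i_br_A = np.linalg.solve(lhs, rhs)
    v_br_A = m["R_br_A"] @ (i_br_A + m["j_br_A"]) + m["e_br_A"]
    # 2. Evaluate the state derivatives (3.72)-(3.73) with dLx/dt = dCy/dt = 0.
    di = np.linalg.solve(m["L_x"], -(m["R_x"] @ i_x) - m["C_LC"] @ v_y
                         - m["C_b_LA"] @ v_br_A - m["B_b_L"] @ m["e_br_L"])
    dv = np.linalg.solve(m["C_y"], -(m["G_y"] @ v_y) - m["D_LC"] @ i_x
                         - m["D_a_CA"] @ i_br_A + m["A_a_C"] @ m["j_br_C"])
    # 3. Advance the states.
    return i_x + dt * di, v_y + dt * dv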
  • Elementary Network Identification Procedure
  • For large electrical networks, storing the associated multi-graphs in matrix format leads to sparse matrices of accordingly large dimensions. Even though the actual topological matrices used for elementary networks as KCLs and KVLs are smaller in size, these matrices are sparse and their dimensions are proportional to the size of the elementary networks. However, in order to gain computational efficiency, it is desirable to employ techniques that avoid multiplication of large sparse matrices. A direct way to reduce unnecessary operations is to use the techniques developed for sparse algebra in all computations involving topological matrices. Essentially, this would lead to component-wise operations involving all of the network branch parameters. Alternatively, it is possible to use a more compact technique for graph representation such as collections of adjacency lists instead of adjacency matrices. Such lists can be conveniently implemented on a computer, which in turn would enable the usage of more efficient Min/MaxST algorithms. Representation of graphs in terms of compact data structures also suggests reformulation of the network identification procedure using ordered sets instead of topological matrices.
  • Input Data
  • Given a general electrical network N, the associated graph is defined in terms of node and branch sets as G=(N,B). The graph G must be closed (G must form a valid circuit), which implies the following
  • (1) each branch in B is a member of some cycle composed of more than one branch (self-loops are not allowed)
  • (2) The entire branch set B can be partitioned into two subsets By and Bx such that they have no branches in common, and the set By has a minimal size and spans the entire node set N. Moreover, Gtrees=(N,By) is a spanning tree. If G is a multi-graph, then Gtrees is a spanning forest (forest of spanning trees). Also, based on the parameter set P, the weight functions ωL (2.81) and ωC (2.86) must be defined over the entire branch set B. Then, depending on the order in which the elementary networks are to be found, a network identification procedure can be formulated in four major steps. Two procedures will now be discussed for the different orders of network identification.
  • L-C-A Procedure Step 1: Call the Minimum Spanning Tree Algorithm

  • MinSTA(G, wL) ⇒ Gtrees = (N, By)  (3.80)
  • Given By from (3.80), the set of link-branches is defined as

  • B x =B−B y  (3.81)
  • Branches in Bx for which ωL(bj)=1 (inductive link-branches) are identified defining the set Bx L. Then, the set of link-branches is re-ordered as follows

  • Bx={{tilde over (B)}x CA,Bx L}  (3.82)
  • where {circumflex over (B)}x CA is a temporary set of non-inductive link-branches.
    Step 2: Bx L is removed from the global branch set B and the remaining closed graph is defined as G̃=(Ñ, B̃CA). Then, it is necessary to find all the tree-branches in By that are linked by B̃x CA. Let this temporary branch set be denoted as B̃y CA. Thereafter, the branch set for G̃ is found as

  • B̃CA = {B̃y CA, B̃x CA}  (3.83)
  • Based on the MinST property, it can be proved that none of the branches in {tilde over (B)}CA are inductive. The branch set By can be partitioned into the following sets

  • By={By LC,{tilde over (B)}y CA}  (3.84)
  • Since some of the branches in By LC may be capacitive, this branch set may be partitioned further as

  • By LC={By L,By C 1 }  (3.85)
  • Also note that since the branch set B̃CA (3.83) forms a closed graph, the remaining branches {By LC, Bx L} do not, unless B̃CA and {By LC, Bx L} are sets of galvanically disjoint branches.
  • Step 3: The Maximum Spanning Tree Algorithm is Applied

  • MaxSTA(G̃, wC) ⇒ G̃trees = (Ñ, By CA)  (3.86)
  • Since By CA has an optimal property, it is no longer a temporary set. Also, due to such possible improvement, this set may be different from the one used in (3.83). Proceeding further, all branches in By CA for which ωC(bj)=1 (capacitive branches) are identified and placed into a separate set here denoted as By C 2 . Then By CA can be partitioned as

  • By CA={By C 2 ,By A}  (3.87)
  • The set of link-branches is retrieved as

  • B x CA ={tilde over (B)} CA −B y CA  (3.88)
  • Step 4: By A is removed from the global branch set By CA and the remaining closed graph is denoted Ĝ=({circumflex over (N)},{circumflex over (B)}C). Next, all link-branches in Bx that correspond to tree-branches in By C 2 are found. Let this branch set be denoted as Bx C. This set actually may or may not have capacitive branches, but all of its branches are links to capacitive trees in By C 2 . The remaining link-branches can be found as

  • B x A =B x CA −B x C  (3.89)
  • Then, the following set partitioning can be formed

  • By CA={By C 2 ,By A}  (3.90)

  • Bx CA={Bx A,Bx C}  (3.91)
  • Based on the property of MaxSTA, it is possible to prove that Bx A has no capacitive branches.
  • Finally, the global branch set B can be reordered in terms of the sets identified so far as

  • B={By L, By C 1 , By C 2 , By A,Bx A, Bx C, Bx L}  (3.92)
  • Thereafter, the elementary networks are formed based on the branch sets as follows

  • {By L, Bx L}∈NL  (3.93)

  • {By C 1 ,By C 2 ,Bx C}∈NC  (3.94)

  • {By A,Bx A}∈NA  (3.95)
  • The branch order established in (3.92) corresponds to the TCF (3.48) with the block matrices having dimensions of their respective sets in (3.93)-(3.95).
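  • The first two steps of this L-C-A procedure can be sketched in code on top of the spanning_forest helper shown earlier. The tree-path search and set names below are illustrative assumptions; Steps 3 and 4 repeat the same pattern with the MaxSTA applied to the remaining closed sub-graph.

from collections import deque

def tree_path(num_nodes, branch_nodes, tree, link):
    """Tree-branches on the unique tree path closed by the given link-branch."""
    adj = {n: [] for n in range(num_nodes)}
    for j in tree:
        a, b = branch_nodes[j]
        adj[a].append((b, j)); adj[b].append((a, j))
    start, goal = branch_nodes[link]
    prev = {start: (None, None)}
    queue = deque([start])
    while queue:                           # BFS over tree-branches only
        n = queue.popleft()
        if n == goal:
            break
        for m, j in adj[n]:
            if m not in prev:
                prev[m] = (n, j); queue.append(m)
    path, n = [], goal
    while prev[n][0] is not None:          # walk back from goal to start
        n, j = prev[n][0], prev[n][1]
        path.append(j)
    return path

def lca_steps_1_2(num_nodes, branch_nodes, w_L, w_C):
    """Steps 1-2: MinSTA with w_L, split of links, temporary non-inductive sub-graph."""
    B_y, B_x = spanning_forest(num_nodes, branch_nodes, weight=w_L)      # (3.80)-(3.81)
    B_x_L = [j for j in B_x if w_L(j) == 1]
    B_x_CA = [j for j in B_x if w_L(j) == 0]                             # (3.82)
    B_y_CA = sorted({j for lk in B_x_CA                                  # trees linked by B_x_CA
                     for j in tree_path(num_nodes, branch_nodes, B_y, lk)})
    B_y_LC = [j for j in B_y if j not in B_y_CA]                         # (3.84)
    B_y_C1 = [j for j in B_y_LC if w_C(j) == 1]
    B_y_L = [j for j in B_y_LC if w_C(j) == 0]                           # (3.85)
    return B_y_L, B_y_C1, B_y_CA, B_x_CA, B_x_L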
  • C-L-A Procedure Step 1: The Maximum Spanning Tree Algorithm is Applied

  • MaxSTA(G, wC) ⇒ Gtrees = (N, By)  (3.96)
  • The set of link-branches becomes

  • B x =B−B y  (3.97)
  • All capacitive branches in By are identified. This set is then re-sorted as follows

  • By={By C,{tilde over (B)}y LA}  (3.98)
  • where By C is the largest set of capacitive tree-branches, {tilde over (B)}y LA is a temporary set of non-capacitive tree-branches.
    Step 2: All link-branches in Bx corresponding to trees in {tilde over (B)}y LA are found. This set of link-branches is denoted as {tilde over (B)}x LA. The combined set

  • {tilde over (B)}LA={{tilde over (B)}y LA,{tilde over (B)}x LA}  (3.99)
  • has no capacitive branches, moreover (3.99) can form a closed graph {tilde over (G)}=(Ñ,{tilde over (B)}LA).
  • The remaining link-branches become

  • B x LC =B x −{tilde over (B)} x LA  (3.100)
  • Among the branches of Bx LC, there may be some which are inductive. These inductive link-branches can be identified and placed into set denoted Bx L 1 . After that, Bx LC can be re-sorted as

  • Bx LC={Bx L 1 ,Bx C}  (3.101)
  • where Bx C includes the remaining non-inductive link-branches that may or may not be capacitive.
  • Step 3: The Minimum Spanning Tree Algorithm is Called

  • MinSTA(G̃, wL) ⇒ G̃trees = (N, By LA)  (3.102)
  • The corresponding set of link-branches is

  • B x LA ={tilde over (B)} LA −B y LA  (3.103)
  • In the set Bx LA, all inductive branches are placed in the set of inductive link branches denoted as Bx L 2 . The remaining non-inductive links are

  • B x A =B x LA −B x L 2   (3.104)
  • Thereafter, the set Bx LA can be reordered as

  • Bx LA={Bx A,Bx L 2 }  (3.105)
  • Step 4: Bx L 2 is removed from the branch set {tilde over (B)}LA. The remaining closed graph becomes Ĝ=({circumflex over (N)},{circumflex over (B)}A). In order to accomplish this, it is necessary to identify all tree-branches in By LA that are linked by Bx A. This branch set is denoted as By A. Thereafter, By LA can be partitioned as

  • By LA = {By L, By A}  (3.106)
  • where By L is found as usual

  • B y L =B y LA −B y A  (3.107)
  • It is important to note that none of the branches in {By A, Bx A} are inductive or capacitive.
  • Finally, the global branch set B can be re-ordered in terms of the smaller sets. Let the order be given as follows

  • B={By C,By L,By A,Bx A,Bx L 2 ,Bx L 1 ,Bx C}  (3.108)
  • The elementary networks are formed based on the branch sets as follows

  • {By C,Bx C}∈NC  (3.109)

  • {By L,Bx L 2 ,Bx L 1 }∈NL  (3.110)

  • {By A,Bx A}∈NA  (3.111)
  • Based on the branch order established in (3.108), the TCF of the node incidence matrix has the following structure
  • ATCF = [ Iτ×τ^C   0        0        D̂τ×t^CA   D̂τ×h^L2   Âτ×k^L1   Âτ×z^C
             0        Iη×η^L   0        0          Âη×h^L    0         0
             0        0        Iζ×ζ^A   Âζ×t^A     Ĉζ×h^LA   0         0       ]  (3.112)
  • where blocks have dimensions corresponding to the respective branch sets in (3.108). In particular
      • τ = size(By C), as determined in (3.98)
      • η = size(By L), as determined in (3.107)
      • ζ = size(By A), as determined in (3.106)
      • t = size(Bx A), as determined in (3.104)
      • h = size(Bx L 2), as determined in (3.105)
      • k = size(Bx L 1), as determined in (3.101)
      • z = size(Bx C), as determined in (3.101)
  • The previous procedures are techniques for sequential identification of elementary networks. Depending on the relative order in which the networks are identified from the global graph, the sequence of steps in the procedure may differ. Similar procedures may be constructed in which the order of elementary network identification is different from the two cases considered above. However, as expected, the results of such procedures are topologically equivalent.
  • General Interconnected Networks
  • In the previous chapter, electrical circuits constructed of branches with components selected from one of the parameter sets (2.10)-(2.13) were considered. The topological conditions under which a circuit of a particular kind can be modeled as a single network of an appropriate type were also presented. Such differentiation was motivated on the basis of the type of equation and state variables (if any) required to describe the respective network. Furthermore, it was shown that as more freedom is allowed in terms of the circuit components and their topology, it is no longer possible to describe the entire circuit as a single type of network. Instead, for circuits composed of the branch models of FIG. 4A, it was necessary to partition the global network N as

  • N=NL∪NA  (3.113)
  • In the case of circuits constructed from the branch models of FIG. 4( b), the necessary partitioning was of the form

  • N=NC∪NA  (3.114)
  • Finally, when both types of branch models FIGS. 4( a) and (b) were considered at the same time, it was shown that the circuit can be consistently described by viewing the corresponding global network as being partitioned as

  • N=(N L ∪N LC ∪N C)∪N A  (3.115)
  • Even though (3.115) comes directly from the TCF (3.48), it may also be written in simplified form as

  • N=NL∪NC∪NA  (3.116)
  • Several observations can also be made about the actual coupling between DEs (3.72)-(3.73). The DE (3.72) has currents as state variables, and voltages as its driving input. The states in DE (3.73) are voltages, and it is the currents that represent the external driving force. Therefore, these two state equations are said to be input-output compatible. Extending this framework, models NL and NC can be viewed as interconnected dynamical systems each of which is represented by its own state equation that are coupled through their respective inputs and outputs and, for certain topologies, through an additional interconnection term due to the AE corresponding to NA.
  • Generic Network Identification
  • Sometimes it may also be useful to partition the global system into several networks based not only on branch contents but also on some other additional constraints or criteria that may come from the description of the actual system and the topological layout of its sections. The property that must be maintained in any network partitioning is the ability to automatically produce KCL and KVL matrices that can be used to describe the corresponding networks and the coupling among them.
  • In constructing the KCL and KVL matrices, an algorithm for assembling the TCF is needed. Previously, in order to assemble the TCF, the multi-graph Gg=(N, B) was searched for a forest of spanning trees Gtrees g=(N,By) with some particular topological properties reflected in an appropriate weight function. This very technique may be extended to accommodate other constraints in addition to those which come from branch components. For instance, it may be possible to have several, possibly pre-specified or guessed, networks. Then, a generic weight function associated with the branch set B can be defined as follows
  • \omega^{\xi}_{\zeta}(b_{j})=\begin{cases}0, & \text{if } P_{\zeta}\{\xi\}(j)=0\\ 1, & \text{if } P_{\zeta}\{\xi\}(j)\neq 0\\ \pm\infty, & \text{if } b_{j}\notin N^{k}_{\zeta}\end{cases}\qquad\text{for } j=1,\ldots,q;\ \zeta=L,C,A;\ \xi=1,\ldots,\operatorname{size}(P_{\zeta})  (3.119)
  • where Pζ{ξ} is to be understood as ξ-th member of the corresponding parameter set Pζ as defined in (2.10)-(2.13). If the member of Pζ is a matrix, then by convention Pζ{ξ} (j) is the j-th diagonal entry of the corresponding matrix. Similarly, when Pζ{ξ} is a vector, then the notation Pζ{ξ} (j) is the j-th element. The N ζ k is the k-th network to be identified. The ± sign in front of the infinity symbol in (3.119) is taken such that the branch bj∉Nζ k is not likely to be included in Gtrees g=(N,By) whether the search is performed for the MinST or MaxST.
  • Furthermore, it is desirable to assemble separate systems of DAEs along with the corresponding coupling terms for some, possibly pre-specified sections of a larger network. Ideally, the network should be partitioned in such a way that the number of variables coupling the corresponding systems of DAEs is minimized. If such a goal is feasible and the number of state variables in each of the DEs is significantly larger than the number of coupling variables, the corresponding networks may be viewed as being weakly coupled. In terms of the topology of weakly coupled networks, it is reasonable to expect that the number of common or connecting branches is small.
  • Thus, a generic step of partitioning the network N into two smaller ones may be considered as

  • N=N1∪N2  (3.120)
  • In doing so, the columns in the node incidence matrix would also be reorganized such that the corresponding TCF would have a particular block structure needed to assemble the KCL and KVL matrices on a network-by-network basis. In this regard, two types of structures of the right-hand side of the TCF have been heretofore encountered: lower-block triangular as in (3.8); and upper-block triangular as in (3.29). Even though these two TCFs were computed based on different topological objectives, they are in fact equivalent. Therefore, without loss of generality, for the network partitioning (3.120), the lower-block triangular TCF may be considered. Based on such a structure for the TCF, the KCL and KVL matrices would have the following form
  • A_{a}=A_{TCF}=\begin{bmatrix} I_{1} & 0 & 0 & A_{1}\\ 0 & I_{2} & A_{2} & A_{21}\end{bmatrix}  (3.121)\qquad B_{b}=\begin{bmatrix} 0 & -(A_{2})^{T} & I_{2} & 0\\ -(A_{1})^{T} & -(A_{21})^{T} & 0 & I_{1}\end{bmatrix}  (3.122)
  • If the partitioning (3.120) is final, then (3.121)-(3.122) would already require a particular choice of state variables for each network. That is, if a single set of state variables is to be used to completely describe a network, it is necessary to have a KCL or a KVL (whichever is appropriate for the selected states) that includes only the branches corresponding to this network. With respect to (3.120), it appears from (3.121) that it is possible to write a KCL including only the branches of N1. Therefore, one can select the set of branches corresponding to the block A1 in (3.121) to represent states such as independent currents. Such a choice of state variables would require N1 to be an inductive network. Relative to network N2, it is possible to write an independent KVL based on (3.122). This suggests a set of independent voltages to be the states and the network to be capacitive. If the right-hand side of the TCF were block diagonal (instead of block triangular), then the networks corresponding to these blocks would be topologically disconnected. In fact, for the multi-graph Gg=(N,B), the TCF would have as many diagonal blocks as there are graphs in Gg.
  • On the other hand, if partitioning (3.120) is not final, it is possible to recursively apply similar sub-network identification procedures to either or both N1 and N2 each time updating the TCF. Ideally, this procedure could be continued until a final collection of elementary networks is obtained. The result of this sequential network partitioning may be symbolically expressed as

  • N=N1∪N2∪ . . . ∪Nn  (3.123)
  • The corresponding TCF can be expressed in the general lower block-triangular form
  • A_{TCF}=\left[\begin{array}{cccc|cccc} I_{1} & & & & & & & A_{1}\\ & I_{2} & & & & & A_{2} & A_{21}\\ & & \ddots & & & & \vdots & \vdots\\ & & & I_{n} & A_{n} & \cdots & A_{n2} & A_{n1}\end{array}\right]  (3.124)
  • Based on the TCF (3.124), and following the usual procedure, it is possible to assemble the KCL and KVL matrices for the global network as well as for each of the networks present in (3.123). Thus, based on (3.124), the KCL matrix for the k-th network (that is, the self-KCL matrix) is defined as

  • A a k =[I k A k ]  (3.125)
  • Then, the corresponding KCL coupling matrix relating the k-th and i-th networks can be written from the k-th row of the TCF as

  • Aa ki=[0Aki]  (3.126)
  • Using definitions (3.125)-(3.126), the KCL equations for all networks in (3.123) can be expressed as
  • A^{k}_{a}\,i^{k}_{br}+\sum_{i=1}^{k-1}A^{ki}_{a}\,i^{i}_{br}=0,\qquad k=1,2,\ldots,n  (3.127)
  • where for k=1 the summation is defined to be zero. Furthermore, based on the structure of (3.126), it can be seen that only currents of the link branches of the other networks are needed in (3.127). Thus, defining new matrices as

  • Da ki=Aki  (3.128)
  • the KCL equations (3.127) can be rewritten as
  • A^{k}_{a}\,i^{k}_{br}+\sum_{i=1}^{k-1}D^{ki}_{a}\,i^{i}_{x}=0,\qquad k=1,2,\ldots,n  (3.129)
  • The KVL matrices are defined in a similar way. In particular, the KVL self-matrix for the k-th network is determined from (3.124) to be

  • B b k=[−(A k)T I k]  (3.130)
  • The KVL matrix coupling the k-th and i-th networks is assembled as

  • B b ki=[−(A ik)T0 ]  (3.131)
  • Thereafter, the KVL equations relating all networks in (3.123) can be written as
  • B^{k}_{b}\,v^{k}_{br}+\sum_{i=k+1}^{n}B^{ki}_{b}\,v^{i}_{br}=0,\qquad k=1,2,\ldots,n  (3.132)
  • with the convention that for k=n, the summation is defined to be zero. Noting that only voltages corresponding to the tree branches of the other networks are needed in (3.132), the following definition can be made

  • C b ki=−(A ik)T  (3.133)
  • Then (3.132) can be rewritten with only the tree voltages under summation as
  • B^{k}_{b}\,v^{k}_{br}+\sum_{i=k+1}^{n}C^{ki}_{b}\,v^{i}_{y}=0,\qquad k=1,2,\ldots,n  (3.134)
  • It can also be noted that (3.128) and (3.133) are related very much like the similar matrices in (3.71).
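  • The bookkeeping behind the self- and coupling matrices (3.125)-(3.133) can be sketched numerically as follows. The snippet is a minimal illustration with assumed example blocks A1 and A21 (none of which are taken from the description); it merely shows how the matrices of (3.125), (3.126), (3.130), and (3.133) are stacked from the blocks of a lower-block-triangular TCF.

```python
import numpy as np

# Illustrative blocks of a two-network lower-block-triangular TCF (assumed values).
A1  = np.array([[1., 0.],
                [0., 1.]])        # right-hand block of network 1
A21 = np.array([[0., 1.]])        # block coupling network 2 to network 1

def self_kcl(Ak):                 # Aa^k = [I_k  A_k]            (3.125)
    return np.hstack([np.eye(Ak.shape[0]), Ak])

def self_kvl(Ak):                 # Bb^k = [-(A_k)^T  I_k]       (3.130)
    return np.hstack([-Ak.T, np.eye(Ak.shape[1])])

def coupling_kcl(Aki, n_tree_i):  # Aa^{ki} = [0  A_{ki}]        (3.126)
    return np.hstack([np.zeros((Aki.shape[0], n_tree_i)), Aki])

def coupling_kvl(Aik):            # Cb^{ki} = -(A^{ik})^T        (3.133)
    return -Aik.T

print(self_kcl(A1))
print(self_kvl(A1))
print(coupling_kcl(A21, n_tree_i=1))
print(coupling_kvl(A21))
```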
  • Furthermore, based on the KCL equations (3.129) and KVL equations (3.134) it is possible to express the state equations for general interconnected networks (3.123). In order to do this it is necessary to assume that there is no mutual coupling among branches belonging to different networks. As usual, the derivation goes through the dimensionality reduction procedure applied to each network in (3.123) that has state variables. Since there are two types of such networks, namely inductive and capacitive, there are only two types of state equations involved. In particular, if for some k, the network type index is ζ=L, meaning that network Nζ k, is inductive, then the corresponding state equation is
  • L^{k}_{x}\frac{di^{k}_{x}}{dt}=-\left(R^{k}_{x}+\frac{dL^{k}_{x}}{dt}\right)i^{k}_{x}-B^{k}_{b}\,e^{k}_{br}-\sum_{i=k+1}^{n}C^{ki}_{b}\,v^{i}_{y}  (3.135)
  • where all the reduced matrices are defined as before in terms of Bb k. Of course, in order to solve all DEs for multiple inductive networks simultaneously and without additional constraints, all state equations (3.135) must be input-output compatible. That is, for all i in (3.135) for which the network type index is ζ=L, it is necessary to have Bb ki=0, implying that the external forcing voltage may come only from other non-inductive networks. If it is not possible to satisfy this condition for two or more inductive networks in (3.123), then those networks should be combined into one. Similarly if, for some k, ζ=C, then network Nζ k is capacitive, and the corresponding state equation is
  • C^{k}_{y}\frac{dv^{k}_{y}}{dt}=-\left(G^{k}_{y}+\frac{dC^{k}_{y}}{dt}\right)v^{k}_{y}+A^{k}_{a}\,j^{k}_{br}-\sum_{i=1}^{k-1}D^{ki}_{a}\,i^{i}_{x}  (3.136)
  • Here, all reduced matrices are defined in terms of Aa k. Also, in order to be able to simultaneously model multiple capacitive networks without additional constraints, all state equations (3.136) must be input-output compatible. That is, all external forcing currents should come only from other non-capacitive networks, which implies that for all i in (3.136) for which ζ=C, it is necessary to have Aa ki=0. Whenever it is not possible to satisfy this condition for some capacitive networks in (3.123), then those networks should be re-combined. The relation of multiple elementary networks is symbolically depicted in FIG. 12. There, the external voltage and current sources are denoted as eext and jext. Also, none of the inductive networks are connected with each other directly. Similarly, the capacitive networks are not coupled among each other through their branch variables. However, all networks are interconnected in such a way that each network with state variables has its proper inputs and outputs. On the other hand, algebraic networks may accommodate any inputs and outputs that are required by respective inductive and capacitive networks. Such an arrangement of networks allows all DAEs to be solved simultaneously.
  • The TCF (3.124) has a very general structure applicable to interconnected networks. The KCLs (3.127) and KVLs (3.132) also reflect possible coupling among all networks in (3.123) through their branch currents and voltages. If many of the networks in (3.123) are mutually de-coupled or weakly coupled, it can be expected that many of the block matrices with double subscripts in (3.124) are zero. For instance, if it is possible to make the right side of the TCF (3.124) block-bi-diagonal, the corresponding networks in (3.123) would be sequentially connected. Another useful way of formulating the relations among the networks is to have one (or maybe several) specific networks that represent all interconnections. In terms of the right side of the TCF, an attempt would be made to form a block diagonal structure as far down as possible by appropriately selecting the networks.
  • Selective Network Partitioning
  • When modeling a switched electrical network N=(G, P, S), it may be of practical interest to partition N into more than just three elementary networks NL, NC, and NA. In general, the network partitioning may be of a more advanced form such as (3.123) where the branches may be assigned to networks based on some additional constraints. For instance, it is reasonable to establish additional constraints such that some of the networks of (3.123) remain unchanged throughout the entire simulation study. The same goal can also be pursued on the basis of elementary networks.
  • For convenience, it is assumed that each elementary network is partitioned into two smaller ones as

  • Nζ=Nζ 1∪Nζ 2  (3.143)
  • where the running subscript is given as ζ=L, C, A. The network identification in (3.143) could be performed with the objectives given in the following subsections.
    Identification of Networks with Variable Parameters
  • Some of the branch parameters of Nζ may depend on time as well as on applied currents and voltages. In this case, the equations corresponding to Nζ become nonlinear with time-varying coefficients. Therefore, it is desirable to partition the network as in (3.143) such that one of the networks, say Nζ 1, takes all branches with nonlinear and/or time-varying parameters. It may be necessary to include other branches in order to make Nζ 1 a proper network. After performing such network partitioning, it may become possible to assemble two systems of equations for Nζ 1 and Nζ 2, respectively, such that each of them possesses smaller dimensions. Then, the equations corresponding to Nζ 2 can be assembled once per topology, and the nonlinear equations with time-varying parameters corresponding to Nζ 1 would be easier to deal with due to their reduction in size. In order to perform partitioning of Nζ into Nζ 1 and Nζ 2 such that their interconnection can be conveniently handled using dimensionality reduction procedures, it is necessary only to assign appropriate weights to the network branches as in (3.119).
  • Identification of Switched Networks
  • For large switched electrical networks, the automated procedure of equation generation becomes quite expensive. As a result of this computational expense, the models of systems with fast switching, such as models that include PWM, hysteresis current inverters, converters, etc., tend to require significantly longer CPU time to run the simulation. However, for a given network N=(G, P, S), it is likely that not all of its structural sections participate in the switching transients throughout the history of topological changes. One might, therefore, attempt to identify system sections with constant topology and group them into separate networks. This idea can also be applied to each elementary network of N. Therefore, with respect to (3.143), an elementary network Nζ can be partitioned such that either Nζ 1 or Nζ 2 possesses a system of equations that need not be reassembled for each new topology. Thereafter, an attempt is made to exclude all such networks from the equation assembling procedures that are performed at each switching instance. Separating the global network N into its switching and non-switching parts would not only reduce the total amount of computations required per change in topology, but also provide a means for the local averaging of state equations for the switched subnetwork as is known in the art.
  • Network Partitioning for Stability Estimation
  • The structure of the DAEs produced by ASMG may also be utilized for the non-impedance-based stability analysis of energy conversion systems represented by their equivalent circuits. That is, instead of relying on the linearization of the state equations and using Nyquist-type criteria, it is possible to generate the DAEs in a form suitable for the Lyapunov analysis. For instance, if needed, a change of variables could be used to rewrite the state equations in the following autonomous form.
  • \frac{dx_{L}}{dt}=f_{L}(x_{L})+g_{L}(x_{L},x_{C})  (3.144)\qquad \frac{dx_{C}}{dt}=f_{C}(x_{C})+g_{C}(x_{L},x_{C})  (3.145)
  • where gL(xL, xC) and gC(xL, xC) are the interconnection terms that also include the algebraic network. The stability of (3.144)-(3.145) can be analyzed as similar to a problem of perturbed motion and studied using Lyapunov functions. However, for this technique, it is necessary to establish Lyapunov functions for each system in (3.144)-(3.145) ignoring the interconnections to begin with.
  • Alternatively, since it may be feasible to assemble a separate state equation for the part of the network N which is time invariant and linear, it may be advantageous to formulate the problem of stability analysis into a robust control framework. For instance, if it is possible to collect all nonlinear elements into a separate algebraic network with no current and/or voltage sources, then the terms that are due to NA can be viewed as a perturbation to the rest of the system.
  • The underlying power of the robust control framework is that it not only enables analysis of the stability of nonlinear systems in certain regions of the state space (attractors, invariant subspaces, etc.) but it also makes possible the design of controllers that ensure some desirable properties. In particular, considering quadratic Lyapunov functions, many analysis and design control problems associated with the model of N can be formulated as linear matrix inequalities (LMIs) that can be solved numerically. In this regard, it can be very advantageous to automatically assemble state and output equations for the global network N directly in a form permitting a linear differential inclusion (LDI) of a particular type.
  • Realization of the Global Network
  • In this section, a global network N=(G, P) with augmented topology is considered further. As shown in the previous chapter, a network of this kind, in general, possesses an interconnection of the form N=NL∪NC∪NA. Such a network representation allows the corresponding systems of DAEs (3.72)-(3.75) to be assembled in a consistent manner for each elementary network with respective interconnections. For compactness, (3.72)-(3.75) can be written as
  • \frac{dx_{LC}}{dt}=f(x_{LC},y^{A}_{iv},u_{ej},t)  (4.1)\qquad g(x_{LC},y^{A}_{iv},u_{ej},t)=0  (4.2)
  • where the global state vector xLC=[xL, xC] consists of minimal state vectors corresponding to the inductive and capacitive networks, yiv A=[ibr A,vbr A] is a vector of branch currents and voltages corresponding to the algebraic network, and uej=[ebr,jbr] is an input vector that is composed of input voltage and current sources.
  • System (4.1) is a first-order system of ODEs, whereas (4.2) is a system of AEs. The AE (4.2) has the form (3.23), (3.25), (3.39), (3.41) and (3.74)-(3.75). If the elements of NA are some known, possibly nonlinear functions of applied currents and voltages, then (4.2) may not have a unique solution. In general, (4.2) may be solved using iterative methods for solving systems of nonlinear equations.
  • Global Network with Linear Parameters
  • Here, it is assumed that the network parameters are possibly time-varying, but do not depend on branch currents or voltages. Then, assuming that (3.75) is well-posed, the linear AE (4.2) always has a unique solution. Moreover, it is possible to solve (4.2) explicitly for yiv A. After doing so, (4.1) and (4.2) can be merged into one system of first-order ODEs. Thereafter, a system of DAEs describing the global network N can be mapped into a state-space realization in its standard form
  • \frac{dx_{LC}}{dt}=A(t)\,x_{LC}+B(t)\,u_{ej}  (4.3)\qquad y_{iv}=C(t)\,x_{LC}+D(t)\,u_{ej}  (4.4)
  • where the output vector is defined as yiv=[ibr,vbr].
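  • Once a realization of the form (4.3)-(4.4) has been assembled, the time-domain response can be computed with any standard integration routine. The following is a minimal sketch using scipy's solve_ivp with assumed constant A, B, C, D matrices and a unit step input; it only illustrates the use of the standard form and is not the ASMG itself.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Assumed (illustrative) constant matrices for x' = A x + B u, y = C x + D u.
A = np.array([[-1.0,  0.5],
              [-0.5, -2.0]])
B = np.array([[1.0],
              [0.0]])
C = np.eye(2)
D = np.zeros((2, 1))
u = np.array([1.0])                      # unit step input

def rhs(t, x):
    return A @ x + B @ u                 # state equation (4.3)

sol = solve_ivp(rhs, (0.0, 10.0), np.zeros(2), max_step=0.01)
y = C @ sol.y + D @ u[:, None]           # output equation (4.4), broadcast over time
print(y[:, -1])                          # response near the end of the interval
```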
  • Before the state equations (3.72)-(3.73) can actually be written in the form of (4.3), some work with AEs (3.74)-(3.75) is needed. For simplicity of derivation, it is assumed that the state variables for the NL and NC are chosen to be independent currents and voltages as defined in (3.63) and (3.64), respectively. Then, defining matrix MA as
  • M_{A}=\begin{bmatrix}\hat{A}^{A}_{a}\\ \hat{B}^{A}_{b}\hat{R}^{A}_{br}\end{bmatrix}  (4.5)
  • which is assumed to have full rank, (3.75) can be rewritten as
  • \hat{i}^{A}_{br}=M_{A}^{-1}\begin{bmatrix}-C^{LA}\\ 0\end{bmatrix}i_{x}+M_{A}^{-1}\begin{bmatrix}0\\ -D^{CA}\end{bmatrix}v_{y}+M_{A}^{-1}\begin{bmatrix}0\\ -\hat{B}^{A}_{b}\end{bmatrix}\hat{e}^{A}_{br}+M_{A}^{-1}\begin{bmatrix}0\\ -\hat{B}^{A}_{b}\hat{R}^{A}_{br}\end{bmatrix}\hat{j}^{A}_{br}=D^{A}_{I}i_{x}+D^{A}_{V}v_{y}+D^{A}_{e}\hat{e}^{A}_{br}+D^{A}_{j}\hat{j}^{A}_{br}  (4.6)
  • The expression for the branch voltages (3.74) for NA can be rewritten as
  • \hat{v}^{A}_{br}=\hat{R}^{A}_{br}M_{A}^{-1}\begin{bmatrix}-C^{LA}\\ 0\end{bmatrix}i_{x}+\hat{R}^{A}_{br}M_{A}^{-1}\begin{bmatrix}0\\ -D^{CA}\end{bmatrix}v_{y}+\left(I+\hat{R}^{A}_{br}M_{A}^{-1}\begin{bmatrix}0\\ -\hat{B}^{A}_{b}\end{bmatrix}\right)\hat{e}^{A}_{br}+\hat{R}^{A}_{br}\left(I+M_{A}^{-1}\begin{bmatrix}0\\ -\hat{B}^{A}_{b}\hat{R}^{A}_{br}\end{bmatrix}\right)\hat{j}^{A}_{br}=C^{A}_{I}i_{x}+C^{A}_{V}v_{y}+C^{A}_{e}\hat{e}^{A}_{br}+C^{A}_{J}\hat{j}^{A}_{br}  (4.7)
  • Substituting (4.6) into (3.72), and (4.7) into (3.73), and collecting terms, the state equations for NL and NC become
  • \frac{di_{x}}{dt}=-L_{x}^{-1}\left(R_{x}+\frac{dL_{x}}{dt}+C^{LA}_{b}C^{A}_{I}\right)i_{x}-L_{x}^{-1}\left(C^{LC}+C^{LA}_{b}C^{A}_{V}\right)v_{y}-L_{x}^{-1}\left(B^{L}_{b}+C^{LA}_{b}C^{A}_{e}\right)e^{LA}_{br}-L_{x}^{-1}C^{LA}_{b}C^{A}_{J}\,j^{A}_{br}=A^{LA}_{L}i_{x}+A^{CA}_{L}v_{y}+B^{LA}_{L}e^{LA}_{br}+B^{A}_{L}j^{A}_{br}  (4.8)
  • \frac{dv_{y}}{dt}=-C_{y}^{-1}\left(G_{y}+\frac{dC_{y}}{dt}-D^{CA}_{a}D^{A}_{V}\right)v_{y}-C_{y}^{-1}\left(D^{LC}+D^{CA}_{a}D^{A}_{I}\right)i_{x}+C_{y}^{-1}\left(A^{C}_{a}+D^{CA}_{a}D^{A}_{J}\right)j^{CA}_{br}-C_{y}^{-1}D^{CA}_{a}D^{A}_{e}\,e^{A}_{br}=A^{CA}_{C}v_{y}+A^{LA}_{C}i_{x}+B^{CA}_{C}j^{CA}_{br}+B^{A}_{C}e^{A}_{br}  (4.9)
  • Finally (4.8) and (4.9) can be assembled to form a state equation in its standard form
  • \frac{d}{dt}\begin{bmatrix}i_{x}\\ v_{y}\end{bmatrix}=\begin{bmatrix}A^{LA}_{L} & A^{CA}_{L}\\ A^{LA}_{C} & A^{CA}_{C}\end{bmatrix}\begin{bmatrix}i_{x}\\ v_{y}\end{bmatrix}+\begin{bmatrix}B^{LA}_{L} & B^{A}_{L}\\ B^{A}_{C} & B^{CA}_{C}\end{bmatrix}\begin{bmatrix}e_{br}\\ j_{br}\end{bmatrix}  (4.10)
  • The stage is set for assembling (4.4). Expressions (4.6) and (4.7) already provide currents and voltages for the branches of NA. Thus, it is necessary to express currents and voltages for the branches of the two remaining networks, NL and NC. Starting with NL a vector of corresponding branch currents is determined from independent currents as

  • i br L=(B b L)T i x  (4.11)
  • A vector of branch voltages for the NL can be written as
  • v^{L}_{br}=R_{br}i^{L}_{br}+L_{br}\frac{di^{L}_{br}}{dt}+\frac{dL_{br}}{dt}i^{L}_{br}+e^{L}_{br}=\left(R_{br}+\frac{dL_{br}}{dt}\right)i^{L}_{br}+L_{br}\frac{di^{L}_{br}}{dt}+e^{L}_{br}  (4.12)
  • Substituting (4.11) into (4.12), an expression for branch voltages in terms of independent currents can be obtained as
  • v^{L}_{br}=\left(R_{br}+\frac{dL_{br}}{dt}\right)(B^{L}_{b})^{T}i_{x}+L_{br}(B^{L}_{b})^{T}\frac{di_{x}}{dt}+e^{L}_{br}  (4.13)
  • After substituting state equation (4.8) into (4.13) and collecting terms, the final expression for branch voltages becomes
  • v^{L}_{br}=\left(\left(R_{br}+\frac{dL_{br}}{dt}\right)(B^{L}_{b})^{T}-L_{br}(B^{L}_{b})^{T}L_{x}^{-1}\left(R_{x}+\frac{dL_{x}}{dt}+C^{LA}_{b}C^{A}_{I}\right)\right)i_{x}-\left(L_{br}(B^{L}_{b})^{T}L_{x}^{-1}\left(C^{LC}+C^{LA}_{b}C^{A}_{V}\right)\right)v_{y}-\left(L_{br}(B^{L}_{b})^{T}L_{x}^{-1}\left(B^{L}_{b}+C^{LA}_{b}C^{A}_{e}\right)-I_{L}\right)e^{LA}_{br}-\left(L_{br}(B^{L}_{b})^{T}L_{x}^{-1}C^{LA}_{b}C^{A}_{J}\right)j^{A}_{br}=C^{LA}_{L}i_{x}+C^{CA}_{L}v_{y}+D^{LA}_{L}e_{br}+D^{A}_{L}j_{br}  (4.14)
  • where IL is an identity-like matrix with ones in diagonal entries corresponding to branches in NL and zeros elsewhere.
  • The equations for a capacitive network are assembled in a similar way. That is, recalling (3.64), a vector of corresponding branch voltages is determined as

  • v br C =(A a C )T v y  (4.15)
  • A vector of branch currents for the NC can be found as
  • i^{C}_{br}=\left(G_{br}+\frac{dC_{br}}{dt}\right)v^{C}_{br}+C_{br}\frac{dv^{C}_{br}}{dt}-j^{C}_{br}  (4.16)
  • Combining (4.15) and (4.16) yields
  • i^{C}_{br}=\left(G_{br}+\frac{dC_{br}}{dt}\right)(A^{C}_{a})^{T}v_{y}+C_{br}(A^{C}_{a})^{T}\frac{dv_{y}}{dt}-j^{C}_{br}  (4.17)
  • Similar to (4.14), (4.9) can be substituted for the state derivative in (4.17). Collecting corresponding terms, the following result is obtained
  • i^{C}_{br}=\left(\left(G_{br}+\frac{dC_{br}}{dt}\right)(A^{C}_{a})^{T}-C_{br}(A^{C}_{a})^{T}C_{y}^{-1}\left(G_{y}+\frac{dC_{y}}{dt}-D^{CA}_{a}D^{A}_{V}\right)\right)v_{y}+\left(C_{br}(A^{C}_{a})^{T}C_{y}^{-1}\left(D^{LC}+D^{CA}_{a}D^{A}_{I}\right)\right)i_{x}+\left(C_{br}(A^{C}_{a})^{T}C_{y}^{-1}\left(A^{C}_{a}+D^{CA}_{a}D^{A}_{J}\right)-I_{C}\right)j^{CA}_{br}+\left(C_{br}(A^{C}_{a})^{T}C_{y}^{-1}D^{CA}_{a}D^{A}_{e}\right)e^{A}_{br}=C^{CA}_{C}v_{y}+C^{LA}_{C}i_{x}+D^{CA}_{C}j^{CA}_{br}+D^{A}_{C}e^{A}_{br}  (4.18)
  • where IC is also an identity-like matrix with ones in diagonal entries corresponding to branches in NC and zeros elsewhere.
  • Assuming that vectors of currents and voltages for all branches of the global network N can be assembled (concatenated) from the corresponding currents and voltages of NL, NC, and NA as

  • i br =i br L +i br C +i br A  (4.19)

  • v br =v br L +v br C +v br A  (4.20)
  • the expressions (4.14) and (4.18) are put together to form (4.4) which becomes
  • \begin{bmatrix}i_{br}\\ v_{br}\end{bmatrix}=\begin{bmatrix}(B^{L}_{b})^{T}+C^{LA}_{C}+D^{A}_{I} & C^{CA}_{C}+D^{A}_{V}\\ C^{LA}_{L}+C^{A}_{I} & (A^{C}_{a})^{T}+C^{CA}_{L}+C^{A}_{V}\end{bmatrix}\begin{bmatrix}i_{x}\\ v_{y}\end{bmatrix}+\begin{bmatrix}D^{A}_{C}+D^{A}_{e} & D^{CA}_{C}+D^{A}_{J}\\ D^{LA}_{L}+C^{A}_{e} & D^{A}_{L}+D^{A}_{j}\end{bmatrix}\begin{bmatrix}e_{br}\\ j_{br}\end{bmatrix}  (4.21)
  • Thus, (4.10) and (4.21) form an automatically assembled minimal realization for the global network N.
    Global Network with Nonlinear Parameters
  • Heretofore, only networks with linear time-varying parameters were considered. Electrical networks with such parameters are linear in the sense that it is possible to apply superposition with respect to state variables. Such networks can be successfully modeled using topological algorithms and dimensionality reduction procedures, and, if needed, the resulting system of DAEs can be mapped into a standard-form state-space realization. In general, network parameters may not only depend on time but also be some functions of state variables such as inductor currents and fluxes, and capacitor voltages and charges. For instance, a branch resistance may depend on the current flowing through or the voltage applied to the branch. Also, if nonlinear magnetic properties are to be included in the model, the corresponding branch inductances become state dependent. This phenomenon is caused by the saturation of the magnetic materials used in inductors. In modeling semiconductor power electronic devices, the effective capacitance of the junction may also depend on voltage or current depending upon the type of device considered. Therefore, it should be considered that

  • R br =R br(t,x L)  (4.22)

  • G br =G br(t,x C)  (4.23)

  • L br =L br(t,x L)  (4.24)

  • C br =C br(t,x C)  (4.25)
  • As before, a system of DAEs for each elementary network and their interconnection will be derived ignoring inactive branches for the sake of convenience. First, the system of AEs (3.75) for the network NA is rewritten as
  • \begin{bmatrix}\hat{A}^{A}_{a}\\ \hat{B}^{A}_{b}\hat{R}^{A}_{br}\end{bmatrix}\hat{i}^{A}_{br}-\begin{bmatrix}-C^{LA} & 0\\ 0 & -D^{CA}\end{bmatrix}\begin{bmatrix}i_{x}\\ v_{y}\end{bmatrix}-\begin{bmatrix}0 & 0\\ -\hat{B}^{A}_{b} & -\hat{B}^{A}_{b}\hat{R}^{A}_{br}\end{bmatrix}\begin{bmatrix}\hat{e}^{A}_{br}\\ \hat{j}^{A}_{br}\end{bmatrix}=0  (4.26)
  • which is a system of nonlinear equations that can be solved iteratively for îbr A. After doing so, the vector of corresponding branch voltages, {circumflex over (v)}br A, is computed using (3.74).
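  • As an illustration of such an iterative solution, the sketch below applies a plain Newton iteration with a finite-difference Jacobian to a generic residual function. The residual shown (a single branch with a cubic current-voltage characteristic driven by a 1 V source) and the function names are assumptions made only for the example; they are not taken from the description of NA.

```python
import numpy as np

def newton(residual, x0, tol=1e-10, max_iter=50, eps=1e-7):
    """Plain Newton iteration with a finite-difference Jacobian (illustrative)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = residual(x)
        if np.linalg.norm(r) < tol:
            break
        J = np.empty((r.size, x.size))
        for j in range(x.size):            # build the Jacobian column by column
            dx = np.zeros_like(x)
            dx[j] = eps
            J[:, j] = (residual(x + dx) - r) / eps
        x = x - np.linalg.solve(J, r)
    return x

# Assumed residual: nonlinear branch  r(i) = i + 0.1 i^3 - 1  (a 1 V source).
print(newton(lambda i: np.array([i[0] + 0.1 * i[0] ** 3 - 1.0]), [0.0]))
```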
  • To assemble DAEs for the inductive network, the voltage equation for branches of this network may be expressed in its most general form
  • v^{L}_{br}=R^{L}_{br}i^{L}_{br}+\frac{d\lambda_{br}}{dt}+e^{L}_{br}  (4.27)
  • Taking into account the other networks, KVL (3.70) can be written as

  • B b L v br L +Σ CA=0  (4.28)
  • where the symbol ΣCA represents all terms that are due to interconnection of NL with capacitive and algebraic networks.
  • Working with nonlinear inductances, (4.24) represents an inconvenience in that it becomes necessary to compute the total time derivative of the matrix Lx which, in turn, contains partial derivatives of Lx with respect to the vector ix. A partial derivative of a matrix with respect to a vector is a "large" and cumbersome object. However, the nonlinear relationship between fluxes and currents must be incorporated into the system of DAEs for the network NL. Therefore, instead of (4.24), the relationship between fluxes and currents can be modeled using a saturation function that would "penalize" the flux linkages at high currents. Using this technique of representing magnetic saturation, the branch fluxes are expressed as

  • λbr =L br i br L−φ br(i br L)  (4.29)
  • where φbr is a vector-valued saturation function. From this point, (4.29) is substituted into (4.27) and the dimensionality reduction procedure using KVL (4.28) is applied to the resulting voltage equation. In doing so, the vector of independent fluxes is found as follows

  • λx =L x i x −B b Lφbr[(B b L)T i x ]=L x i x−φx(i x)  (4.30)
  • where, by analogy to all other reduced quantities, φx(ix) is the reduced saturation function.
  • A direct result of the dimensionality reduction procedure is a state equation with independent fluxes as state variables. For consistency, this equation is written as
  • \frac{d\lambda_{x}}{dt}=-R_{x}i_{x}-B^{L}_{b}e^{L}_{br}-\Sigma^{CA}  (4.31)
  • Equations (4.30)-(4.31) form a system of DAEs for the network NL. Here, it is necessary to solve the nonlinear AE (4.30) for ix at each integration step, or more precisely, at each call to the derivative function (4.31). In general, iterative methods would be used for solving (4.30) for the vector of currents. For small integration steps, Newton's method with an initial guess taken as ix from the previous integration step could yield fast convergence. It may also be possible to re-parameterize the saturation function φx(ix) into φ′xx). If this is achieved, then (4.30) would become

  • L x i x =λ x +φ′ xx )  (4.32)
  • which is a linear equation with respect to ix.
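  • If such a re-parameterization is available, recovering the currents at each derivative evaluation reduces to the linear solve indicated by (4.32). A minimal numerical sketch, in which the reduced inductance matrix and the tanh-type saturation correction are assumptions made only for the example:

```python
import numpy as np

# Assumed reduced inductance matrix and flux-parameterized saturation term.
L_x = np.array([[2.0, 0.5],
                [0.5, 1.0]])

def phi_prime_x(lam):
    return 0.3 * np.tanh(lam)          # illustrative correction phi'_x(lambda_x)

def currents_from_fluxes(lam):
    # (4.32):  L_x i_x = lambda_x + phi'_x(lambda_x)  ->  linear solve for i_x
    return np.linalg.solve(L_x, lam + phi_prime_x(lam))

print(currents_from_fluxes(np.array([1.0, -0.5])))
```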
  • Instead of re-parameterization of the saturation function φx(ix), ix can be chosen to be the state vector in order to avoid having to solve a system of nonlinear equations. Using the chain rule, the time derivative of fluxes in (4.30) can be expressed as
  • \frac{d\lambda_{x}}{dt}=\frac{\partial\lambda_{x}}{\partial i_{x}}\frac{di_{x}}{dt}+\frac{\partial\lambda_{x}}{\partial t}=\left(L_{x}-\frac{\partial\phi_{x}}{\partial i_{x}}\right)\frac{di_{x}}{dt}+\frac{dL_{x}}{dt}i_{x}  (4.33)
  • Combining (4.30) and (4.33), the state equation for the network NL with currents as the state variables can be written in implicit form as
  • \left(L_{x}-\frac{\partial\phi_{x}}{\partial i_{x}}\right)\frac{di_{x}}{dt}=-\left(R_{x}+\frac{dL_{x}}{dt}\right)i_{x}-B^{L}_{b}e^{L}_{br}-\Sigma^{CA}  (4.34)
  • The vector of branch currents is computed as usual using (4.11), and (4.28) can be used to compute branch voltages. Applying the chain rule to (4.30) and utilizing (4.11), the derivative of branch fluxes can be expressed as
  • \frac{d\lambda_{br}}{dt}=\left(L_{br}-\frac{\partial\phi_{br}}{\partial i_{br}}\right)(B^{L}_{b})^{T}\frac{di_{x}}{dt}+\frac{dL_{br}}{dt}(B^{L}_{b})^{T}i_{x}  (4.35)
  • Combining (4.27) and (4.35), the output equation becomes
  • v^{L}_{br}=\left(L_{br}-\frac{\partial\phi_{br}}{\partial i_{br}}\right)(B^{L}_{b})^{T}\frac{di_{x}}{dt}+\left(R^{L}_{br}+\frac{dL_{br}}{dt}\right)(B^{L}_{b})^{T}i_{x}+e^{L}_{br}  (4.36)
  • In (4.36), there is a term proportional to the derivative of the state vector. If (4.34) can be expressed in explicit form, its right-hand side could be substituted into (4.36). However, since the derivative of the state vector is computed anyway, this term can be present in (4.36) without any computational disadvantage.
  • A similar system of DAEs can also be assembled for capacitive networks. The current equation for branches of NC is
  • i^{C}_{br}=G^{C}_{br}v^{C}_{br}+\frac{dq_{br}}{dt}-j^{C}_{br}  (4.37)
  • and KCL (3.67) can be written here as

  • A a C i br C +Σ LA=0  (4.38)
  • For the capacitive network, the nonlinear relation between capacitor charges and voltages can be represented in a form similar to magnetic saturation. That is

  • q br =C br v br C−φbr(v br C)  (4.39)
  • where φbr is also a vector-valued function representing a part of capacitance that depends on the voltage. Using the dimensionality reduction procedure with (4.15) and (4.38), a vector of independent charges can be found as

  • q y =C y v y −A a Cφbr((A a C)T v y)=C y v y−φy(v y)  (4.40)
  • Using the chain rule and (4.15), the derivatives of (4.39)-(4.40) with respect to time are found as
  • \frac{dq_{br}}{dt}=\left(C_{br}-\frac{\partial\varphi_{br}}{\partial v_{br}}\right)(A^{C}_{a})^{T}\frac{dv_{y}}{dt}+\frac{dC_{br}}{dt}(A^{C}_{a})^{T}v_{y}  (4.41)\qquad \frac{dq_{y}}{dt}=\left(C_{y}-\frac{\partial\varphi_{y}}{\partial v_{y}}\right)\frac{dv_{y}}{dt}+\frac{dC_{y}}{dt}v_{y}  (4.42)
  • Finally, following the procedure, the state equation for NC can also be obtained in implicit form as
  • \left(C_{y}-\frac{\partial\varphi_{y}}{\partial v_{y}}\right)\frac{dv_{y}}{dt}=-\left(G_{y}+\frac{dC_{y}}{dt}\right)v_{y}+A^{C}_{a}j^{C}_{br}-\Sigma^{LA}  (4.43)
  • The vector of branch currents is computed based on (4.37) and (4.41) as
  • i^{C}_{br}=\left(C_{br}-\frac{\partial\varphi_{br}}{\partial v_{br}}\right)(A^{C}_{a})^{T}\frac{dv_{y}}{dt}+\left(G^{C}_{br}+\frac{dC_{br}}{dt}\right)(A^{C}_{a})^{T}v_{y}-j^{C}_{br}  (4.44)
  • The equations developed thus far can be written more compactly. For the algebraic network, (4.26) and (3.74) have the following form

  • f A (î br A ,i x ,v y ,ê br A ,ĵ br A ,t)=0  (4.45)

  • {circumflex over (v)} br A =g A (î br A ,i x ,v y ,ê br A ,ĵ br A ,t)  (4.46)
  • The two state equations (4.34) and (4.43) are rewritten as
  • M_{L}(t)\frac{di_{x}}{dt}=f_{L}(i_{x},v_{y},v^{A}_{br},e^{L}_{br},t)  (4.47)\qquad M_{C}(t)\frac{dv_{y}}{dt}=f_{C}(i_{x},v_{y},j^{A}_{br},j^{C}_{br},t)  (4.48)
  • where ML(t) and MC(t) are so-called mass matrices that can depend on time and state.
  • Combined, (4.11) and (4.36) are written compactly as
  • y^{L}_{br}=g_{L}\!\left(\frac{di_{x}}{dt},\,i_{x},\,e^{L}_{br},\,t\right)  (4.49)
  • Similarly, (4.44) together with (4.15) are expressed as
  • y^{C}_{br}=g_{C}\!\left(\frac{dv_{y}}{dt},\,v_{y},\,j^{C}_{br},\,t\right)  (4.50)
  • where ybr L=[ibr L,vbr L] and ybr C=[ibr C,vbr C] are the output vectors.
  • Equations (4.45)-(4.50) form a system of DAEs which describe the global network N. Here, (4.45) is a system of nonlinear equations that could be solved numerically. Then, (4.46) and (4.49)-(4.50) are explicit systems of AEs that are not expensive to evaluate. The implicit systems of DEs (4.47)-(4.48) may be solved for the respective derivatives using some efficient techniques developed for systems of linear equations. In such arrangements, a linear solver would be called at each call to the corresponding derivative function. Alternatively, there are efficient numerical techniques that have been designed to accommodate time dependent mass matrices.
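  • The linear-solver-per-derivative-call arrangement mentioned above can be sketched as follows for a single implicit system M(t) dx/dt=f(x,t). The particular mass matrix and right-hand side are assumptions made only for the example; the same pattern applies to (4.47)-(4.48).

```python
import numpy as np
from scipy.integrate import solve_ivp

def mass(t):                               # assumed time-varying mass matrix M(t)
    return np.array([[2.0 + np.sin(t), 0.2],
                     [0.2,             1.0]])

def f(x, t):                               # assumed right-hand side f(x, t)
    return np.array([-x[0] + 1.0, -3.0 * x[1]])

def rhs(t, x):
    # implicit DE  M(t) dx/dt = f(x, t): solve a linear system at every call
    return np.linalg.solve(mass(t), f(x, t))

sol = solve_ivp(rhs, (0.0, 5.0), [0.0, 1.0], max_step=0.01)
print(sol.y[:, -1])
```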
  • Switched Electrical Networks
  • A network of a general kind was previously defined as N=(G, P, s), where s is some given topology. If N is a switched network, the vector s is different for each new topology. In order for the constructed model to be meaningful, N should be a finite electrical network for each encountered topology. Throughout the simulation process, the network N may switch sequentially from an initial topology s0 to the final topology sm, say as

  • s0, s1, . . . , si, . . . , sj, . . . , sm  (5.1)
  • Based on this sequence, a so-called topology matrix can be formed as

  • S=[s0, s1, . . . , si, . . . , sj, . . . , sm]  (5.2)
  • For each distinct topology of (5.1), there is a minimal realization for the inductive and capacitive networks present in N. In general, these minimal realizations may have a different number of states for each topology. That is, for a given topology si of the sequence (5.1), the state vectors of NL and NC belong to some appropriate vector spaces

  • x^{i}_{L}\in\mathbb{R}^{\alpha_{i}}  (5.3)

  • x^{i}_{C}\in\mathbb{R}^{\beta_{i}}  (5.4)
  • where the dimensions of the minimal state vectors for the networks NL and NC are denoted with respect to (5.1) as

  • α0, α1, . . . , αi, . . . , αj, . . . , αm  (5.5)

  • β0, β1, . . . , βi, . . . , βj, . . . , βm  (5.6)
  • respectively.
  • It is also expected that in the topological sequence (5.1), there exist an si and an sj in which the number of state variables is maximum for the realizations of NL and NC, respectively. Of course, there may be several distinct topologies that require the same maximum number of states for their respective realizations. Assuming that it is always possible to obtain a non-minimal realization of a desired size by including some redundant states, it is possible to define the maximum numbers of state variables required to implement networks NL and NC as

  • α=max{α0, α1, . . . , αi, . . . , αj, . . . , αm}  (5.7)

  • β=max{β0, β1, . . . , βi, . . . , βj, . . . , βm}  (5.8)
  • respectively.
  • Furthermore, since for each single topology si the network N=(G, P, si) must be finite, the same is required for the switched network N=(G, P, S). Symbolically this requirement can be rewritten as

  • N=(G,P,S)∈N q α+β  (5.9)
  • where α and β are defined in (5.7) and (5.8), and α+β≦q. Sometimes α and β are referred to as the network complexity. Also, since S is a topology matrix, N=(G, P, S) is equivalent to a family of networks.
  • The sequence (5.1) may contain repeated topologies, and the actual number of distinct topologies may be small relative to the length m of the complete sequence of topologies throughout the entire simulation history. In this case, a reduced topology matrix that includes only the distinct topological vectors may be defined. Without loss of generality, it can be assumed that in the sequence s0, s1, . . . , si, . . . sj, . . . sr, . . . , sm for some r≦m, the first r+1 vectors are distinct. Thereafter, a reduced topology matrix is defined as

  • Sr=[s0, s1, . . . , sr]  (5.10)
  • With respect to the network N, it is possible to say that Sr spans all encountered allowable topologies.
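  • In an implementation, the observation that only the distinct topologies of Sr require fresh equation assembly can be exploited by caching the assembled equations keyed by the topology vector. The sketch below is only an illustration; build_realization is a hypothetical stand-in for the equation-generation step, not a routine defined in this description.

```python
# Cache the assembled equations per distinct topology, mirroring the reduced
# topology matrix Sr of (5.10).  build_realization is a hypothetical stand-in.

def build_realization(s):
    print("assembling equations for topology", s)
    return {"topology": s}                # placeholder for the assembled DAEs

cache = {}

def realization_for(s):
    key = tuple(s)                        # topology vector used as dictionary key
    if key not in cache:
        cache[key] = build_realization(s)
    return cache[key]

for s in [(1, 0, 1), (0, 1, 1), (1, 0, 1), (0, 1, 1)]:   # repeated topologies
    realization_for(s)
print("distinct topologies encountered:", len(cache))     # -> 2
```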
    Calculation of Initial Conditions after Switching Event
  • Here, the two types of switched networks, namely inductive and capacitive are considered. For this case, it is required that the currents through inductive branches and the voltages across capacitive branches are continuous as the network undergoes topological changes. Then, the continuity conditions for inductive and capacitive networks can be written as

  • i i L =i i+1 L =i br L , and ∥i br L ∥<∞  (5.11)

  • v i C =v i+1 C =v br C , and ∥v br C ∥<∞  (5.12)
  • In other words, the vectors (or more precisely trajectories) ibr L and vbr C must be bounded and continuous across topological boundaries. Recalling how ibr L and vbr C are related to the vectors of independent inductor currents and capacitor voltages, (5.11)-(5.12) can also be rewritten as

  • (B i L)T i x i=(B i+1 L)T i x i+1 =i br L  (5.13)

  • (A i C)T v y i=(A i+1 C)T v y i+1 =v br C  (5.14)
  • The KVL and KCL matrices corresponding to the second topology si+1 can also be expressed in terms of the branch order in the respective TCF as

  • B i+1 L=└−(Â i+1 L)T I i+1 L0┘(T L i+1)T  (5.15)

  • A i+1 C =└I i+1 C Â i+1 L 0┘(T C i+1)T  (5.16)
  • where TL i+1 and TC i+1 are appropriate permutation matrices. Based on the structure of (5.15)-(5.16), it is possible to define the following right pseudo inverses

  • (B i+1 L)+ =T L i+1[0I i+1 L0]T=(B i+1 base)T  (5.17)

  • (A i+1 C )+ =T C i+1 [I i+1 C 0 0]T=(A i+1 base)T  (5.18)
  • It can be noted that Bi+1 base and Ai+1 base are full-rank matrices containing only columns that make the basis in either case. Therefore, (5.17)-(5.18) can be used to compute the corresponding initial conditions for the new topology. In particular, the initial values of the independent currents and voltages for the topology si+1, can be computed as

  • i x i+1 =B i+1 base i i L  (5.19)

  • v y i+1 =A i+1 base v i C  (5.20)
  • That is, based on branch currents and voltages just before commutation, it is possible to compute initial values for the state variables for the next topology such that the usual continuity conditions are satisfied.
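  • Numerically, (5.19)-(5.20) are simple matrix-vector products applied at the switching instant. A minimal sketch for the inductive case, in which the KVL matrix of the old topology and the basis matrix of the new topology are assumed example values (not taken from the description):

```python
import numpy as np

# Assumed example matrices: one loop over three branches before switching, and
# the basis matrix B_{i+1}^base selecting the new independent current.
B_old      = np.array([[1.0, 1.0, 0.0]])      # (B_i^L) for topology s_i
B_new_base = np.array([[0.0, 1.0, 0.0]])      # B_{i+1}^base for topology s_{i+1}

i_x_old  = np.array([2.0])                    # independent current before switching
i_br_old = B_old.T @ i_x_old                  # (5.13): branch currents just before
i_x_new  = B_new_base @ i_br_old              # (5.19): initial state for s_{i+1}
print(i_br_old, i_x_new)
```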
  • In the present embodiment, the values in the system are constrained to represent finite currents and voltages. Thus, in resistive networks,

  • ∥i i A ∥<∞, and ∥i i+1 A ∥<∞  (5.24)

  • ∥v i A ∥<∞, and ∥v i+1 A ∥<∞  (5.25)
  • In networks with state variables, this constraint may be expressed as

  • i i L =i i+1 L =i br L , and ∥i br L ∥<∞  (5.26)

  • v i C =v i+1 C =v br C , and ∥v br C ∥<∞  (5.27)
  • Non-Minimal Realization
  • Thus far, all state equations for the networks NL and NC were assembled algorithmically based on topological conditions using the minimal number of state variables required to represent the dynamics of each network. That is, for each individual topology, the system of DAEs constitutes a minimal state-space realization. Since the topology of N=(G, P, s) is changing, it is reasonable to expect that the dimensions of the minimal state vectors may also change. In general, the size of the minimal realization for each network NL or NC may vary significantly throughout the sequence of topologies s0, s1, . . . , si, . . . , sm. On the other hand, for analytical and/or numerical reasons, it may be convenient to have matrices and vectors that do not change their sizes dynamically from one topology to the next. For this reason, in a computer implementation of the ASMG, matrices Aa C and Bb L can be defined so that the dimensions do not change throughout the entire simulation of N=(G, P, S). Whenever needed, the appropriate number of zero rows can be appended to the bottom in order to maintain the same matrix size. That is, Aa C and Bb L can be modified as
  • \tilde{A}^{C}_{a}=\begin{bmatrix}A^{C}_{a}\\ 0\end{bmatrix}  (5.36)\qquad \tilde{B}^{L}_{b}=\begin{bmatrix}B^{L}_{b}\\ 0\end{bmatrix}  (5.37)
  • The total number of rows for Aa C and for Bb L is then determined by the maximum number of state variables (5.7)-(5.8) needed to represent networks NL and NC for any topology in the sequence s0, s1, . . . , si, . . . , sm. Furthermore, using Ãa C and {tilde over (B)}b L in the dimensionality reduction procedures for NC and NL, respectively, the reduced matrices Cy and Lx will be block-diagonal, with full-rank upper-left blocks and zeros elsewhere. Therefore, having such a structure, these matrices are also block-invertible. Moreover, the same block-diagonal structure is preserved under any non-singular coordinate transformation applied to the state variables in (2.106) or (2.111).
  • Besides having the same state-space dimensions for all encountered topologies, it is also possible to make some specific state variables redundant. For a network Nζ with state variables and constant parameters, a minimal state-space realization can be assembled in its standard form
  • \frac{dx_{\zeta}}{dt}=A^{\zeta}x_{\zeta}+B^{\zeta}u,\qquad y_{iv}=C^{\zeta}x_{\zeta}+D^{\zeta}u  (5.38)
  • where ζ=L, C, meaning that the network Nζ may be inductive or capacitive.
  • A coordinate transformation matrix can be defined as
  • K_{\zeta}=\begin{bmatrix}I\\ M\end{bmatrix}  (5.39)
  • and the new vector of state variables as

  • {tilde over (x)}ζ=Kζxζ  (5.40)
  • where M can be any real matrix of appropriate dimensions. Note that Mxζ is an additional vector of redundant states. That is, any element of this vector is nothing more than some linear combination of the state variables in xζ. Then, defining the right pseudo inverse of Kζ to be of the form

  • K ζ + =[I 0]  (5.41)
  • (5.38) can be rewritten in terms of the new states as
  • \frac{d\tilde{x}_{\zeta}}{dt}=K_{\zeta}A^{\zeta}K_{\zeta}^{+}\tilde{x}_{\zeta}+K_{\zeta}B^{\zeta}u=\tilde{A}^{\zeta}\tilde{x}_{\zeta}+\tilde{B}^{\zeta}u,\qquad y_{iv}=C^{\zeta}K_{\zeta}^{+}\tilde{x}_{\zeta}+D^{\zeta}u=\tilde{C}^{\zeta}\tilde{x}_{\zeta}+\tilde{D}^{\zeta}u  (5.42)
  • This transformation of variables can be carried through if, instead of (5.38), the more general system of DAEs (4.10), (4.21), and (4.45)-(4.50) is considered.
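  • The effect of the augmentation (5.39)-(5.42) can be checked numerically: the transformed realization carries additional (redundant) states but is obtained from the original matrices by simple products with Kζ and its pseudo-inverse. The sketch below uses assumed A, B, C, and M matrices chosen only for illustration.

```python
import numpy as np

# Assumed minimal realization and redundancy matrix M (redundant state = x1 + x2).
A = np.array([[-1.0, 0.0],
              [ 1.0, -2.0]])
B = np.array([[1.0],
              [0.0]])
C = np.array([[0.0, 1.0]])
M = np.array([[1.0, 1.0]])

K      = np.vstack([np.eye(2), M])                    # (5.39)
K_pinv = np.hstack([np.eye(2), np.zeros((2, 1))])     # (5.41): K_pinv @ K = I

A_t = K @ A @ K_pinv                                  # (5.42) transformed matrices
B_t = K @ B
C_t = C @ K_pinv
print(K_pinv @ K)                                     # identity check
print(A_t.shape, B_t.shape, C_t.shape)                # 3-state augmented realization
```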
  • Minimal Common Realization
  • Herein, a switched electrical network N=(G, P, S)∈Nq α+β with the sequence of topologies given in (5.1) is considered. It is assumed that N is switching between two distinct generic topologies, si and si+1. As usual, for any single topology s, the global network N=(G, P, s) possesses an interconnection of elementary networks of the form N=NL∪NC∪NA. If inactive branches are included in their respective elementary networks, the commutation conditions (5.24)-(5.27) must also be satisfied with respect to some fixed branch order. In particular, for networks with state variables, the continuity of inductor currents and capacitor voltages across topological boundaries results in continuity of respective branch currents and voltages as expressed in (5.11)-(5.12).
  • In addition to the switching of the branch currents and voltages, it is also interesting to consider the commutation of state variables between topologies. For this purpose, a network with state variables Nζ, where ζ=L, C, is considered. If Nζ is modeled using its respective minimal state equation, then it is reasonable to expect that the state vector xζ will also change its dimension from one topology to the next. Specifically, based on (5.3)-(5.4) and with respect to si and si+1, the state vector xζ can be related to the corresponding vector space as

  • x^{i}_{\zeta}\in\mathbb{R}^{\gamma_{i}}  (5.43)

  • x^{i+1}_{\zeta}\in\mathbb{R}^{\gamma_{i+1}}  (5.44)
  • where the index γi (equal to αi or βi) denotes the size of the minimal state vector for the inductive or capacitive network as given in (5.5)-(5.6) with respect to the sequence (5.1). Naturally, since the topologies si and si+1 are distinct, with regard to (5.43)-(5.44), it is reasonable to expect that

  • γi≠γi+1  (5.45)

  • and

  • x ζ i ≠x ζ i+1  (5.46)
  • from which it follows that for the simulation process of switched network Nζ, the state variables are discontinuous. Note that (5.46) is likely to be the case not only because of (5.45), but also due to the fact that the natural state variables (independent currents, fluxes, voltages, and charges) are selected algorithmically based only on the topological information for the active portion of the network. That is, the state vector xζ i is assembled based only on N=(G, P, si). Similarly, the state variables in xζ i+1 are selected based on knowledge of N=(G, P, si+1). Therefore, even though the dimensions of xζ i and xζ i+1 may happen to be the same, in general, the state variables will not be continuous across topological boundaries.
  • As shown in the preceding section, it is always possible to obtain a non-minimal realization for each topology of the sequence (5.1) such that instead of (5.43)-(5.44),

  • x^{i}_{\zeta},\,x^{i+1}_{\zeta}\in\mathbb{R}^{\gamma}  (5.47)
  • where γ (equal to α or β) denotes the maximum required size for the state vectors as defined in (5.7)-(5.8). Realizations of the maximum required size for which (5.47) holds can be obtained by appropriately applying the transformation of variables (5.39)-(5.41) for each new topology of the sequence (5.1). However, in general, condition (5.47) does not guarantee continuity of the state variables across the topological boundaries, and {tilde over (x)}ζ i∉C.
  • In addition to (5.47), it is desirable to replace (5.46) with an expression of the form

  • {tilde over (x)} ζ i ={tilde over (x)} ζ i+1  (5.48)
  • If the transformation of state variables is performed in such a way that (5.39)-(5.41) holds for every switching event in the sequence (5.1), the resulting model would exhibit global state-space continuity, xζ∈C. The ASMG with the state scheduling algorithm developed in this thesis is capable of mapping each network incidence N=(G, P, si) into a system of DAEs. However, there are many numerical as well as analytical reasons that make the property xζ∈C very desirable.
  • Without condition (5.48), the simulation of the network N=(G, P, S) is essentially a concatenation of solutions of m+1 initial value problems (IVPs) corresponding to the time intervals between topologies (5.1). Therefore, the simulation would require re-computing the initial conditions and restarting the integration routine for each new topology. In the case of continuous state variables, the need for re-initializing the integration routine would disappear.
  • If conditions (5.47)-(5.48) are satisfied for all switching instances of (5.1), the corresponding systems of DAEs for different topologies also become compatible. That is, having state continuity xζ∈C, it is possible to apply averaging techniques known in the art to the state equations obtained for distinct topologies. Another important advantage of working with state-space realizations for which (5.47)-(5.48) are enforced, is that it becomes possible to apply Lyapunov-based stability analysis for polytopic systems. Therefore, the development of a deterministic state selection algorithm subject to the continuity constraints (5.47)-(5.48) should be considered.
  • The first step in deriving the state selection algorithm subject to {tilde over (x)}ζ∈C is to define the change of variables (5.39)-(5.41) for two adjacent generic topologies si and si+1, and enforce (5.47)-(5.48). To achieve this, it is necessary to relate state vectors xζ i and xζ i+1 through some matrices such that

  • x ζ i=i T ζ i+1 x ζ i+1  (5.49)

  • x ζ i+1=i+1 T ζ i x ζ i  (5.50)
  • where iTζ i+1 and i+1Tζ i are right and left pseudo inverses of each other, respectively.
  • As done above, it is assumed that all branches, active as well as inactive, are assigned to their respective networks and that this particular branch order is fixed for both topologies. Also, for convenience of derivation, the state variables are selected to be inductor currents and capacitor voltages; whereas the dimensions of respective state vectors are kept constant by utilizing (5.36)-(5.42). Then the state variables for the instance of commutation between two topologies can be related as

  • i x i+1 =B i+1 base i i L =B i+1 base(B i L)T i x i=i+1 T L i i x i  (5.51)

  • v y i+1 =A i+1 base v i C =A i+1 base(A i C)T v y i=i+1 T C i v y i  (5.52)
  • If the appropriate redundant states are included in ix i, ix i+1 and vy i, vy i+1, then the transformation matrices i+1TL i and i+1TC i, under conditions of any proper commutation, can always be made to have full rank. Thereafter, iTζ i+1 and i+1Tζ i become inverses of each other.
  • The second step in deriving an algorithm for global state space continuity consists of applying results of the first step recursively to all topologies of the sequence (5.1). During the time interval of topology si, network N=(G, P, si) is modeled using a transformation of variables of the form

  • {tilde over (x)}ζ=0Kζ ixζ i  (5.53)
  • where, for ζ=L, C, xζ i denotes ix i or vy i, and {tilde over (x)}ζ is the respective state vector transformed by 0Kζ i such that the state variables are continuous starting from the initial topology s0 all the way up to topology si. This step represents an inductive assumption. Then, in order to keep continuity of {tilde over (x)}ζ during the switching from si to si+1, based on (5.49), the following relation must be satisfied

  • {tilde over (x)} ζ=0 K ζ ii T ζ i+1 x ζ i+1  (5.54)
  • From (5.54), it follows that the transformation of variables for the topology si+1 is in fact
  • \tilde{x}_{\zeta}={}^{0}K^{i+1}_{\zeta}\,x^{i+1}_{\zeta}  (5.55)\qquad\text{where}\qquad {}^{0}K^{i+1}_{\zeta}=\prod_{n=0}^{i}{}^{n}T^{\,n+1}_{\zeta}  (5.56)
  • Based on (5.50), and very similar to (5.54), it is clear that the inverse of 0Kζ i+1 can also be found as
  • {}^{i+1}K^{0}_{\zeta}={}^{i+1}T^{\,i}_{\zeta}\;{}^{i}K^{0}_{\zeta}=\prod_{n=0}^{i}{}^{\,i+1-n}T^{\,i-n}_{\zeta}  (5.57)
  • The transformation matrices (5.56)-(5.57) can be used in (5.42) in order to achieve {tilde over (x)}ζ∈C. The same transformation of variables can be applied to the more general system of DAEs (4.10), (4.21), and (4.45)-(4.50), in order to obtain global continuity of state variables.
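  • In an implementation, the cumulative transformation (5.56) can simply be accumulated as a running matrix product, with one additional factor per switching event. The per-event transformation matrices in the sketch below are assumed full-rank examples, not values derived in this description.

```python
import numpy as np

# Assumed per-event transformation matrices  0T1 and 1T2 (full rank).
event_transforms = [np.array([[1.0, 0.0],
                              [0.5, 1.0]]),
                    np.array([[0.0, 1.0],
                              [1.0, 0.0]])]

K = np.eye(2)                        # before any switching the map is the identity
for T in event_transforms:
    K = K @ T                        # (5.56): 0K_{i+1} = 0K_i * iT_{i+1}

K_inv = np.linalg.inv(K)             # (5.57): the inverse cumulative transformation
print(K)
print(K_inv)
```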
  • Implementation
  • The state selection method discussed herein provides a convenient structure of DAEs for the ASMG. This methodology can be readily implemented on a computer using a matrix representation of a graph corresponding to the global switched network N=(G, P, S). Thereafter, all DAEs for elementary networks can be readily assembled based on the TCF for each topology of N=(G, P, S). However, a brute-force implementation of the ASMG involving sparse topological matrices together with sparse parameter matrices would result in very long simulation run times in the case of electrical networks with time-varying and/or non-linear parameters and a large number of branches. Therefore, an efficient numerical implementation of the ASMG-generated DAEs is a very important issue that will now be addressed.
  • Instead of employing matrices for the representation of graphs in N=(G, P, S), it is possible to use a collection of linked lists or arrays with multiple indirections. Running spanning tree algorithms on the resulting, more compact data structures requires less CPU time. Utilizing this data structure, the network identification procedure produces arrays containing tree and link branches for each elementary network. Similar arrays can also be assembled by reading the indices of the appropriate rows and columns of the TCF matrix (3.48) or (3.112). Thereafter, it is possible to obtain arrays of branches forming a cycle corresponding to each link branch. Each such array represents a set of branches forming a loop headed by its respective link branch. Also, for each network branch, it is possible to assemble an array that records the loop numbers corresponding to the loops in which this branch participates. Such an array represents a loop participation set for the given branch. Similarly, it is possible to form arrays of cutset branches corresponding to each tree-branch, and arrays of cutset participations for all branches. These compact arrays of loop (loop participation) and cutsets (cutset participation) sets can be formed on a network-by-network basis. As it will be shown, assembling DAEs and computing their terms using arrays of branch sets avoids unnecessary operations and, therefore, significantly reduces the computational complexity.
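  • The array-based bookkeeping described above can be sketched with ordinary Python containers. In the illustration below, the loop definitions and branch indices are assumed example data; the point is only that the loop-participation map is the inverse index of the loop sets.

```python
# Assumed example loop sets: each link-branch heads the basic loop it closes.
loops = {
    4: [0, 1, 4],        # loop headed by link-branch 4
    5: [1, 2, 3, 5],     # loop headed by link-branch 5
}

# Loop-participation sets: for each branch, the loops in which it participates.
participation = {}
for link, branches in loops.items():
    for b in branches:
        participation.setdefault(b, set()).add(link)

print(participation)     # e.g. branch 1 participates in both loops
```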
  • Implementation of State and Output Equations
  • In order to reduce the computational effort associated with solving the network DAEs, it is instructive to recall those equations in matrix form. In this section, the output equations (4.13), (4.17), (4.36), and (4.44) are considered first. Then attention is turned to the state equations (3.72) and (3.73), and their more general form (4.34) and (4.43).
  • Output Equations
  • Following the sequence in which the material was presented in preceding sections, the inductive network is considered first. For convenience, the voltage equation for NL with time-varying parameters is repeated here.
  • v^{L}_{br}=R^{L}_{br}i^{L}_{br}+\frac{dL_{br}}{dt}i^{L}_{br}+L_{br}\frac{di^{L}_{br}}{dt}+e^{L}_{br}  (6.1)
  • The voltage for the k-th branch of NL can be written directly from (6.1) in terms of summations as
  • v^{L}_{br}(k)=\sum_{l\in B^{L}}R^{L}_{br}(k,l)\,i^{L}_{br}(l)+\sum_{l\in B^{L}}\frac{dL_{br}}{dt}(k,l)\,i^{L}_{br}(l)+\sum_{l\in B^{L}}L_{br}(k,l)\,\frac{di^{L}_{br}(l)}{dt}+e^{L}_{br}(k)  (6.2)
  • where both indices l and k span all branches of NL. The number of calculations in (6.2) can be significantly reduced if the summations are performed including only those branches that are coupled through mutual resistances and inductances. Since the coupling among the network branches is known ahead of time, it is possible to define sets that compactly store this information. For instance, it is convenient to define Mk R as the set containing indices of all branches that are coupled by a mutual resistance with the k-th branch of NL. Also, if branch bk has a non-zero resistance, it is convenient to let Mk R contain k as well. On the other hand, if bk has zero resistance, it cannot be coupled resistively to any other branches and consequently Mk R is empty. Similarly, Mk L is defined to contain indices of inductive branches that are coupled with bk by a mutual inductance. Again, Mk L contains only k if branch bk is inductive and is not coupled with other network branches. If bk has zero inductance, the corresponding set is Mk L=Ø. Furthermore, among branches represented in Mk L there are some that have time-varying inductance. Thus, it is possible to identify a subset Mk Li⊂Mk L that includes all branches with time-varying inductance. Utilizing such sets, (6.2) can be rewritten as
  • v^{L}_{br}(k)=\sum_{l\in M^{R}_{k}}R^{L}_{br}(k,l)\,i^{L}_{br}(l)+\sum_{m\in M^{Li}_{k}}\frac{dL_{br}}{dt}(k,m)\,i^{L}_{br}(m)+\sum_{n\in M^{L}_{k}}L_{br}(k,n)\,\frac{di^{L}_{br}(n)}{dt}+e^{L}_{br}(k)  (6.3)
  • Assuming that sets Mk R, Mk L, and Mk Li are assembled for each branch of NL, the vector of branch voltages vbr L can be computed using (6.3) with reduced expense compared to (6.2).
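  • The savings of (6.3) over (6.2) come from summing only over the coupling sets. A compact numerical sketch, in which the parameter matrices, coupling sets, and branch quantities are all assumed example data (the inductances are taken constant, so the dL/dt term vanishes):

```python
import numpy as np

# Assumed branch parameters for a three-branch inductive network.
R = np.array([[1.0, 0.2, 0.0],
              [0.2, 1.0, 0.0],
              [0.0, 0.0, 0.5]])
L = np.array([[1e-3, 2e-4, 0.0],
              [2e-4, 1e-3, 0.0],
              [0.0,  0.0,  0.0]])
i_br  = np.array([1.0, -2.0, 0.5])       # branch currents
di_br = np.array([10.0, 0.0, -5.0])      # branch current derivatives
e_br  = np.array([0.0, 0.0, 1.0])        # branch voltage sources

M_R = {0: [0, 1], 1: [0, 1], 2: [2]}     # resistive coupling sets
M_L = {0: [0, 1], 1: [0, 1], 2: []}      # inductive coupling sets (branch 2: none)

def v_branch(k):                         # (6.3) with constant inductances
    v = e_br[k]
    v += sum(R[k, l] * i_br[l] for l in M_R[k])
    v += sum(L[k, n] * di_br[n] for n in M_L[k])
    return v

print([v_branch(k) for k in range(3)])
```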
  • The same concept can be applied to the capacitive network. The output equation for the time-varying case of NC is
  • i^{C}_{br}=G^{C}_{br}v^{C}_{br}+\frac{dC_{br}}{dt}v^{C}_{br}+C_{br}\frac{dv^{C}_{br}}{dt}-j^{C}_{br}  (6.4)
  • Similar sets corresponding to the branches with mutual conductances Mk G and mutual capacitances Mk C, and subsets of branches with time-varying capacitances Mk Ct⊂Mk C, can be assembled for the k-th branch of NC. The expression for computing the k-th component of ibr C becomes
  • i^{C}_{br}(k)=\sum_{l\in M^{G}_{k}}G^{C}_{br}(k,l)\,v^{C}_{br}(l)+\sum_{m\in M^{Ct}_{k}}\frac{dC_{br}}{dt}(k,m)\,v^{C}_{br}(m)+\sum_{n\in M^{C}_{k}}C_{br}(k,n)\,\frac{dv^{C}_{br}(n)}{dt}-j^{C}_{br}(k)  (6.5)
  • The sets Mk R, Mk L, Mk Lt, and Mk G, Mk C, Mk Ct can be obtained for the initial topology and then updated for each new topology.
  • This technique of summing over only relevant terms can be extended to the more general form of output equations (4.36) and (4.44). In the case of inductive networks, among the branches represented in Mk L, there may be some branches for which the inductance depends on network currents. The indices of these branches can be placed in a set Mk ϕ. Similarly, among the capacitive branches represented in Mk C, there may be some branches for which the capacitance is a function of the applied voltage. These branches are collected in Mk φ. Clearly, Mk ϕ⊂Mk L and Mk φ⊂Mk C. After obtaining these additional branch sets, the components of (4.36) and (4.44) can be computed as follows
  • v_{br}^{L}(k) = \left(\sum_{m \in M_k^{R}} R_{br}^{L}(k,m)\, i_{br}^{L}(m)\right) + \left(\sum_{n \in M_k^{L}} L_{br}(k,n)\, \frac{d i_{br}^{L}(n)}{dt}\right) + \left(\sum_{i \in M_k^{Lt}} \frac{\partial L_{br}(k,i)}{\partial t}\, i_{br}^{L}(i)\right) + \left(\sum_{j \in M_k^{\phi}} \frac{\partial \phi_{br}(j)}{\partial i_{br}^{L}(k)}\, \frac{d i_{br}^{L}(j)}{dt}\right) + e_{br}^{L}(k)   (6.6)
  • i_{br}^{C}(k) = \left(\sum_{m \in M_k^{G}} G_{br}^{C}(k,m)\, v_{br}^{C}(m)\right) + \left(\sum_{n \in M_k^{C}} C_{br}(k,n)\, \frac{d v_{br}^{C}(n)}{dt}\right) + \left(\sum_{i \in M_k^{Ct}} \frac{\partial C_{br}(k,i)}{\partial t}\, v_{br}^{C}(i)\right) + \left(\sum_{j \in M_k^{\varphi}} \frac{\partial \varphi_{br}(j)}{\partial v_{br}^{C}(k)}\, \frac{d v_{br}^{C}(j)}{dt}\right) - j_{br}^{C}(k)   (6.7)
  • State Equations
  • For the purposes of numerical implementation, it is convenient to view the state equation, either for NL or for NC, in the following general form
  • M(x,t)\, \frac{dx}{dt} = A(x,t) + g(u,t)   (6.8)
  • where A(x, t) represents all terms that define the state self-dynamics. The forcing term g(u, t) takes into account all external sources and interconnections with other networks. Since all inputs represented by u have the same units, the function g(u, t) will have the form of a summation. Finally, M(x, t) is the mass matrix.
  • Thus, NL with time-varying parameters is considered first. The state equation for this case can be written as
  • L_x\, \frac{d i_x}{dt} = -\left(R_x + \frac{\partial L_x}{\partial t}\right) i_x - B_b^{L} e_{br}^{L} - C_b^{LA} v_{br}^{A} - C^{LC} v_y^{C}   (6.9)
  • The forcing vector g(u, t), with respect to (6.9), can be expressed as

  • g_L(u,t) = -B_b^{L} e_{br}^{L} - C_b^{LA} v_{br}^{A} - C^{LC} v_y^{C}   (6.10)
  • Of course, computing (6.10) by means of full matrix multiplication is not efficient. Instead, it is possible to utilize the topological information about the matrices in (6.10). In particular, recalling KVL matrix (3.50) and expression (3.62), it can be concluded that the k-th component of the forcing vector (6.10) is the sum of external (forcing) voltages taken around the k-th loop with respect to the inductive network. Then, by examining the nonzero entries of the respective KVL matrices Bb L, Cb LA, and CLC, it is possible to assemble special sets of loop branches for each term in (6.10). The set of branch indices corresponding to the k-th inductive link-branch is denoted as Lk L, which contains the k-th branch. The set Lk Le⊂Lk L includes only those NL loop-branches that have non-zero external voltage sources. Similarly, Lk LA contains the indices of the loop-branches that happen to be in the algebraic network, and Lk LC includes the loop-branches that are in NC. Clearly, Lk=Lk L∪Lk LA∪Lk LC contains all loop-branches (the complete basic loop) corresponding to the k-th inductive link-branch. These branch sets can be assembled for each new network topology. Based on these compact loop-sets, the k-th component of the forcing vector can be computed with reduced effort as follows
  • g_L(k) = -\sum_{l \in L_k^{Le}} B_b^{L}(k,l)\, e_{br}^{L}(l) - \sum_{m \in L_k^{LA}} C_b^{LA}(k,m)\, v_{br}^{A}(m) - \sum_{n \in L_k^{LC}} C^{LC}(k,n)\, v_y^{C}(n)   (6.11)
  • The reduced inductance and reduced resistance matrices in (6.9) are each defined as a triple product of appropriate matrices. Using the typical matrix multiplication process, the entries of these matrices are found as
  • L_x(i,j) = \sum_{m=1}^{\eta+h} \sum_{n=1}^{\eta+h} B_b^{L}(i,m)\, B_b^{L}(j,n)\, L_{br}(m,n)   (6.12)
  • R_x(i,j) = \sum_{m=1}^{\eta+h} \sum_{n=1}^{\eta+h} B_b^{L}(i,m)\, B_b^{L}(j,n)\, R_{br}^{L}(m,n)   (6.13)
  • The terms under summations in (6.12) and (6.13) are non-zero only when the internal indices m and n correspond to the branches that are members of the i-th and j-th respective loops and to the non-zero entry of the respective parameter matrix. Then, using loop sets earlier denoted as LL, each (i, j)-th element of Lx and Rx can be computed with reduced computational effort as
  • L_x(i,j) = \sum_{m \in L_i^{L}} \sum_{n \in L_j^{L}} B_b^{L}(i,m)\, B_b^{L}(j,n)\, L_{br}(m,n)   (6.14)
  • R_x(i,j) = \sum_{m \in L_i^{L}} \sum_{n \in L_j^{L}} B_b^{L}(i,m)\, B_b^{L}(j,n)\, R_{br}^{L}(m,n)   (6.15)
  • For networks with time-varying inductances and resistances, (6.14) and (6.15) can be updated by running the summation indices m and n only over those branches that have time-varying parameters.
  • This method of computing the reduced parameter matrices is advantageous over performing the full triple matrix multiplication. An even better performance can be obtained by using loop participation sets. It can be recalled that in the case of NL only the network loop branches that are linked by inductive links make a contribution to (6.12)-(6.13). Thus, for each k-th branch of NL, it is possible to assemble a set that contains all network loops (loop indices) in which this branch participates as determined by the local KVL matrix Bb L. For convenience, the loop numbering can be made the same as for the rows of Bb L. It is useful to define LPk as the set that stores loop numbers in which the k-th branch takes part. This set can be assembled by recording indices of the non-zero entries of the k-th column of Bb L. Thereafter, it is possible to express the contribution of mutual inductance between the m-th and n-th branches in the reduced inductance matrix as follows
  • \Delta L_x(i,j) = B_b^{L}(i,m)\, B_b^{L}(j,n)\, L_{br}(m,n), \quad \forall i \in LP_m,\ \forall j \in LP_n   (6.16)
  • When k=m=n, expression (6.16) defines the contribution of the self inductance corresponding to the k-th network branch. In the same way, the contribution of a mutual resistance between the m-th and n-th branches in the reduced resistance matrix can be expressed as
  • \Delta R_x(i,j) = B_b^{L}(i,m)\, B_b^{L}(j,n)\, R_{br}^{L}(m,n), \quad \forall i \in LP_m,\ \forall j \in LP_n   (6.17)
  • Thus, using (6.16)-(6.17), the reduced parameter matrices in the state equation (6.9) can be computed and updated in a very efficient way.
  • Considering NC with time-varying parameters, the duality with respect to the previous network can be utilized. The corresponding state equation is
  • C_y\, \frac{d v_y}{dt} = -\left(G_y + \frac{\partial C_y}{\partial t}\right) v_y + A_a^{C} j_{br}^{C} - D_a^{CA} i_{br}^{A} - D^{LC} i_x^{L}   (6.18)
  • where the forcing vector can be expressed as

  • g_C(u,t) = A_a^{C} j_{br}^{C} - D_a^{CA} i_{br}^{A} - D^{LC} i_x^{L}   (6.19)
  • In this case, it is possible to utilize the topological information available in the TCF (3.48). In particular, recalling KCL expression (3.56), it can be concluded that the k-th component of the forcing vector (6.19) is the sum of currents external to the capacitive network. Moreover, the non-zero entries of the KCL matrices Aa C, Da CA, and DLC correspond to some cutset branches. The set of cutset branches corresponding to the k-th capacitive tree branch is denoted as Ck C, and the subset Ck j⊂Ck C includes the branches that have nonzero external current sources. The sets Ck CA and Ck CL include the indices of the cutset branches that are in the algebraic and inductive networks, respectively. Clearly, Ck=Ck C∪Ck CA∪Ck CL is the complete set of cutset branches corresponding to the k-th capacitive tree-branch. As before, all branch sets are assembled for each new network topology. Thereafter, the k-th component of the forcing vector can be computed with minimized effort as
  • g_C(k) = \sum_{l \in C_k^{j}} A_a^{C}(k,l)\, j_{br}^{C}(l) - \sum_{m \in C_k^{CA}} D_a^{CA}(k,m)\, i_{br}^{A}(m) - \sum_{n \in C_k^{CL}} D^{LC}(k,n)\, i_x^{L}(n)   (6.20)
  • The reduced parameter matrices can be computed using cutset participation sets. For convenience, the cutset numbering can be made the same as that for the rows of Aa C. The set CPk includes the cutset indices in which the k-th capacitive branch participates. By analogy, this set can be assembled by recording indices of the non-zero entries of the k-th column of Aa C. Thereafter, the contribution of a mutual capacitance and/or mutual conductance between the m-th and n-th branches in their respective reduced matrices can be expressed as
  • \Delta C_x(i,j) = A_a^{C}(i,m)\, A_a^{C}(j,n)\, C_{br}(m,n), \quad \forall i \in CP_m,\ \forall j \in CP_n   (6.21)
  • \Delta G_x(i,j) = A_a^{C}(i,m)\, A_a^{C}(j,n)\, G_{br}(m,n), \quad \forall i \in CP_m,\ \forall j \in CP_n   (6.22)
  • Similarly, when k=m=n, (6.21)-(6.22) define contributions of the self-capacitance and self-conductance of the k-th branch of the capacitive network. Using (6.21)-(6.22), the reduced parameter matrices in (6.18) can be computed and updated efficiently.
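  • A C sketch of how the contribution (6.21) might be accumulated is given below. It is illustrative only: the cutset participation sets are assumed to be stored as plain index arrays with explicit lengths, and all identifiers (add_capacitance_contribution, AaC, Cbr, CPm, CPn) are names invented for this example rather than part of the foregoing description.
    /* Illustrative sketch of (6.21): add the contribution of the mutual
     * capacitance Cbr[m][n] into the reduced matrix Cx using the cutset
     * participation sets of branches m and n. */
    void add_capacitance_contribution(double **Cx, double **AaC, double **Cbr,
                                      const int *CPm, int CPm_len,
                                      const int *CPn, int CPn_len,
                                      int m, int n)
    {
        for (int a = 0; a < CPm_len; ++a) {       /* m-th branch is in cutset i */
            int i = CPm[a];
            for (int b = 0; b < CPn_len; ++b) {   /* n-th branch is in cutset j */
                int j = CPn[b];
                Cx[i][j] += AaC[i][m] * AaC[j][n] * Cbr[m][n];
            }
        }
    }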
  • Building Algorithm for Networks with Variable Parameters
  • After the appropriate branch sets have been assembled, the implementation of the inductive and capacitive networks becomes very similar. In this section, a pseudo code implementing the output and state equations is described with respect to the inductive network only. The same techniques and conclusions can be directly applied to the capacitive network.
  • To implement output equation (6.3), the branch sets earlier defined as Mk R, Mk L, and Mk Lt are used. In order to have convenient and fast access to parameters, it is assumed that all parameter matrices use full matrix storage. If standard matrix-vector multiplication is used to implement (6.2), the computational complexity is

  • \eta_{6.2}(n) = \Theta(3n^2 + n)   (6.23)
  • However, since the density of the matrices in (6.1) is a function of the number of mutual parameters, the complexity of (6.3) is somewhat reduced as compared to Θ(3n² + n). In particular, the complexity of the pseudo code in Code Block 1 implementing (6.3) is

  • \eta_{6.3}(n) = \Theta\left[n + n_s^{R} + n_s^{L} + n_s^{Lt} + 2\left(n_m^{R} + n_m^{L} + n_m^{Lt}\right)\right]   (6.24)
  • which, in turn, can be anywhere between Θ(4n) and Θ(3n² + n), and would still have the same upper bound on the order of growth of O(n²). In the previous expressions for complexity, n is the number of branches in the given network, ns R, ns L, ns Lt and nm R, nm L, nm Lt are the total numbers of self and mutual resistive, inductive, and time-varying inductive parameters, respectively. The values of those numbers are readily determined from the branch sets MR, ML, and MLt.
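  • As a purely hypothetical illustration of (6.24), consider a network of n = 50 branches with no mutual coupling in which every branch has a constant resistance and inductance, so that ns R = ns L = 50 and ns Lt = nm R = nm L = nm Lt = 0. Expression (6.24) then gives on the order of 50 + 50 + 50 = 150 operations per evaluation of (6.3), compared to roughly 3·50² + 50 = 7550 operations for the full matrix-vector products of (6.2).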
  • Code Block 1
    for (∀k∈BL) { /* each k-th branch in NL */
      /* initialize memory slot with ebr L  */
      VLnw[k] = EsL[k];
      /* j-th and k-th branches have mutual R  */
      for (∀j∈Mk R) {
        VLnw[k] = VLnw[k] + RLnw[k][j] * ILnw[j];
      }
      /* j-th and k-th branches have mutual L */
      for (∀j∈Mk L) {
        VLnw[k] = VLnw[k] + Lnw[k][j] * dILnw[j];
      }
      /* j-th and k-th branches have mutual L(t) */
      for (∀j∈Mk Lt) {
        VLnw[k] = VLnw[k] + dLnw[k][j] * ILnw[j];
      }
    }
  • Implementation of the state equation (6.8) is accomplished in parts. First, the forcing vector is computed according to (6.10). Second, depending on the type of network parameters and choice of state variables, the vector of state derivatives is calculated. A pseudo code block implementing (6.10) is shown in Code Block 2. The code is similar to that shown in Code Block 1. The computational complexity of the pseudo code in Code Block 2 is
  • \eta_{6.10}(n) = \Theta\left(n + \sum_{i=1}^{n} n_i^{Le} + \sum_{i=1}^{n} n_i^{LA} + \sum_{i=1}^{n} n_i^{LC}\right)   (6.25)
  • which has the same upper bound of O(n²). Here, parameter n denotes the number of inductive link branches or the total number of inductive loops, and ni Le is the number of voltage sources in each i-th inductive loop. Then ni LA and ni LC are the numbers of algebraic and capacitive network branches, respectively, that are also part of the i-th inductive loop. These quantities are determined by the branch sets LLe, LLA, and LLC.
  • Code Block 2
    for (k=1...size(Bx L)) { /* each k-th inductive link/loop */
      gL[k] = 0.0; /* initialize memory slot  */
      /* j-th branch in k-th loop has voltage source  */
      for (∀j∈Lk Le) {
        gL[k] = gL[k] − BbL[k][j] * EsL[j];
      }
      /*  j-th branch in k-th loop is also in NA */
      for (∀j∈Lk LA) {
        gL[k] = gL[k] − CbLA[k][j] * VAnw[j];
      }
      /*  j-th branch in k-th loop is also in NC */
      for (∀j∈Lk LC) {
        gL[k] = gL[k] − C_LC[k][j] * vy[j];
      }
    }
  • The building algorithm for assembling reduced parameter matrices consists of two parts: initialization and update. The pseudo code implementing these procedures is given in Code Blocks 3-5. Since all reduced parameter matrices can be assembled using their respective loop and loop participation (cutset and cutset participation) branch sets, the building algorithm for the other reduced matrices can be obtained by appropriately modifying the pseudo code given therein. Therefore, only the reduced inductance matrix is discussed.
  • The computational complexity as well as the features of the algorithms given in Code Blocks 3-5 are very different. The pseudo code in Code Block 3 can be used to assemble the reduced parameter matrices utilizing loop (cutset) sets as in (6.14). This code may be used to initialize and update the parameter matrices. The initialization is performed by writing zeros in each memory slot, and then by computing the appropriate contribution due to the constant inductances into an auxiliary storage matrix L̄x. Pre-computing the contribution of all time-invariant inductances into a separate matrix suggests the approach

  • L_x(t) = \bar{L}_x + \Delta L_x(t)   (6.26)
  • Code Block 3
    for (i = 1...size(Bx L)) { /* i-th inductive loop */
      for (j = 1...size(Bx L)) { /* j-th inductive loop */
        Lx[i][j] = 0.0;  /* initialize memory slot */
        /* m-th branch is in i-th loop  */
        for (∀m∈Li L) {
          /* n-th branch is in j-th loop */
          for (∀n∈Lj L) {
            Lx[i][j] = Lx[i][j]
            + BbL[i][m] * BbL[j][n] * Lnw[m][n];
          }
        }
      }
    }
  • Code Block 4
    /* inductance between m-th and n-th branches of NL */
    for (∀<m,n>∈LN L) {
      /*  m-th branch is in i-th inductive loop */
      for (∀i∈LPm) {
        /*  n-th branch is in j-th inductive loop */
        for (∀j∈LPn) {
          Lx[i][j] = Lx[i][j]
          + BbL[i][m]*BbL[j][n]*L[m][n];
        }
      }
    }
  • Code Block 5
    /* variable L between m-th and n-th branches of NL */
    for (∀<m,n>∈LN L var) {
      /*  m-th branch is in i-th inductive loop */
      for (∀i∈LPm) {
        /*  n-th branch is in j-th inductive loop */
        for (∀j∈LPn) {
          Lx[i][j] = Lx[i][j] + BbL[i][m] * BbL[j][n]
          *(Lvar[m][n] − Lold[m][n]);
        }
      }
      /* record used value for the future call */
      Lold[m][n] = Lvar[m][n];
    }

    where ΔLx(t) is the respective contribution due to the remaining variable inductances. Then the procedure for updating Lx(t) can be implemented by initializing each memory slot of Lx(t) with pre-computed values from the corresponding slots in L̄x. This technique, however, has the disadvantage of having to copy the entire matrix L̄x at each update. In large networks where only a few parameters are variable, copying the entire matrix L̄x over to Lx(t) may represent significant and unnecessary overhead.
  • Utilizing the loop participation sets according to (6.16)-(6.17), the reduced parameter matrices can be assembled as shown in Code Block 4, without the disadvantage of having to copy the entire matrix for each update. Thus, the code in Code Block 4 avoids the scheme (6.26) altogether. However, if the code in Code Block 4 is used to update the reduced matrix, the original matrix will be destroyed which, in turn, makes it difficult to carry the contribution due to time-invariant parameters from one update to the next. In the algorithm shown in Code Block 3, this function is accomplished by copying the entire matrix; whereas Code Block 4 updates only the relevant entries. One way of using the algorithm in Code Block 4 for systems with both time-varying and time-invariant parameters is to “undo” the previous update before performing a new one. This two-step update procedure is equivalent to updating the matrix once with the difference between the old and the new values of the variable parameters. The pseudo code illustrating the update procedure is given in Code Block 5. Therein, the reduced inductance matrix is updated due only to changes in variable inductances. The implementation of this update algorithm requires auxiliary static memory for storing previous update values. This storage should be of the same size and type as the original reduced parameter matrix. It is also noted that the subtraction of the old and new inductances should be performed first as indicated by the parentheses so as to reduce the round-off errors due to finite precision machine arithmetic.
  • The complexity of the algorithms in Code Blocks 3-5 is a function of the size of the loop and loop participation sets. The sizes of these sets and their structures are, in turn, determined by the self parameters and the mutual coupling between network branches. Therefore, the complexity of the algorithms in Code Blocks 3-5 is highly system-dependent. Analyzing each algorithm for the worst case, which corresponds to all branches having variable parameters and all branches coupled with each other, results in similar complexity for all algorithms and reveals little about their actual performance with respect to practical networks. Instead, the problem of assembling the reduced parameter matrices using the algorithms in Code Block 3 and Code Block 4 should be considered with respect to some typical cases. Expressions for the complexities can be significantly simplified using certain assumptions. Thus, for the purpose of derivation, it is assumed that there is no mutual coupling. Then, inspecting Code Block 3 for implementing (6.14) with this assumption in mind, the complexity can be expressed as

  • \eta_{6.14}(n) = \Theta\left[n^2\left(\bar{m}^2 + 1\right)\right]   (6.27)
  • where n is the number of inductive loops, and m̄ is the average number of branches in each loop. Denoting mi to be the number of branches in the i-th loop, (6.27) can be developed further as
  • \eta_{6.14}(n) = \Theta\left[n^2\left(\frac{1}{n}\sum_{i=1}^{n} m_i\right)^{2} + n^2\right] = \Theta\left[\left(\sum_{i=1}^{n} m_i\right)^{2} + n^2\right]   (6.28)
  • With respect to (6.16), the complexity of the code in Code Block 4 is determined by the sizes of the loop participation sets LP. In particular, assuming only self inductances, the computational effort for the contribution from one branch is proportional to the square of the number of loops in which this branch participates. Denoting the number of loops in which the i-th branch participates as pi, including initialization, the expression for the complexity for the code in Code Block 4 becomes
  • \eta_{6.16}(n) = \Theta\left(\sum_{i=1}^{n} p_i^{2} + n^2\right)   (6.29)
  • Both (6.28) and (6.29) demonstrate significant computational savings when compared to Θ[n²(nbr² + 1)], which is the cost of the triple matrix multiplication. Here, nbr denotes the total number of branches in the given network. A closer look at (6.28) and (6.29) suggests that for large networks, the advantage of the building algorithm presented here should become more noticeable.
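  • As a purely hypothetical numerical illustration, consider a network with n = 20 inductive loops, nbr = 100 branches, and an average of m̄ = 4 branches per loop, so that Σmi = 80. Expression (6.28) then gives on the order of 80² + 20² = 6800 operations, whereas the triple matrix multiplication costs on the order of n²(nbr² + 1) = 400·10001 ≈ 4.0×10⁶ operations.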
  • The run-time computational routines in one embodiment of the present invention will now be discussed in relation to FIG. 5. ODE solver 41 maintains state vector x, which is provided to inductive links current calculator 43 (for state variables relating to inductive elements) and capacitive trees voltage calculator 45 (the portions xC represent state variables relating to capacitive elements). Inductive links current calculator 43 calculates the current ilink L in the inductive link branches of the circuit, providing that information to resistive network algebraic equation calculator 47 and inductive network state/output equation evaluator 49. Likewise, capacitive trees voltage calculator 45 calculates the voltages vtree C in capacitive tree branches as discussed above, and provides that information to resistive network algebraic equation component 47 and capacitive network state/output equation evaluator 51.
  • Resistive network algebraic equation evaluator 47 uses ilink L and vtree C with ebr A and jbr A to calculate ibr A and vbr A, which are provided to inductive network state/output equation component 49, capacitive network state/output equation component 51, event variable calculator 53, and the system output. Inductive network state/output equation component 49 uses ilink L and vbr A along with inputs ebr L and jbr L to determine ibr L, vbr L, (provided to event variable calculator 53 and the system output) and dxL/dt (provided to the ODE solver 41). Capacitive network state/output equation component 51 uses ibr A and vtree C along with ebr C and jbr C to calculate ibr C, vbr C, (provided to event variable calculator 53 and the system output) and dxC/dt (provided to the ODE solver 41).
  • The branch voltages and currents are output as vectors ibr and vbr. These values are used with ubr by event variable calculator 53 to produce event variable zxnp, which is passed to the ODE solver 41. In addition to solving the differential equations for the system, ODE solver 41 monitors for negative-to-positive zero crossings of zxnp. If a zero crossing is encountered, which indicates that a switch or switches are opening or closing, the state selection algorithm (part of state model generator 31 in FIG. 3) is invoked to establish a new set of state variables and to update the branch sets and matrices used by the state equation building algorithm discussed above.
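  • A highly simplified C sketch of this run-time loop is given below. It is illustrative only: the solver interface, the event-variable computation, and the state-selection routine are represented by placeholder declarations (ode_step, event_variable, reselect_states) that are not defined in the foregoing description, and a fixed integration step is assumed.
    typedef struct Model Model;   /* opaque: branch sets, reduced matrices, ... */

    /* Placeholders for the routines described above (FIG. 3 and FIG. 5). */
    extern void   ode_step(Model *m, double *x, double t, double h);
    extern double event_variable(const Model *m, const double *x, double t);
    extern void   reselect_states(Model *m, double *x, double t);

    void simulate(Model *m, double *x, double t0, double tf, double h)
    {
        double z_prev = event_variable(m, x, t0);
        for (double t = t0; t < tf; t += h) {
            ode_step(m, x, t, h);                    /* advance the state vector   */
            double z = event_variable(m, x, t + h);  /* switching event variable   */
            if (z_prev < 0.0 && z >= 0.0)            /* negative-to-positive cross */
                reselect_states(m, x, t + h);        /* new topology, new states   */
            z_prev = z;
        }
    }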
  • Inductive links current calculator 43 will now be discussed in relation to FIG. 6. At decision block 61, it is determined whether currents or fluxes have been used to define the state variables for the system. If currents have been used, then the state variables from xL are already in the proper form and are provided as output. Otherwise, if fluxes have been used, it is determined at decision block 63 whether constant or variable inductance parameters are present. If the parameters are constant, the output is calculated at block 67 as Lx −1xL. Alternatively, if the inductance parameters are variable, block 69 determines the output vector by solving Lxilink L=xL.
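  • A short C sketch of this choice is given below; it is illustrative only, with a placeholder linear solver (solve_dense) and dense matrix storage assumed, and it is not part of the foregoing description.
    /* Placeholder for any dense linear solver: solves A*x = b for x (n x n). */
    extern void solve_dense(double **A, const double *b, double *x, int n);

    /* Recover the inductive link-branch currents from flux-type states:
     * with constant parameters a pre-computed inverse Lx_inv is reused
     * (block 67); with variable parameters Lx * ilink = xL is re-solved
     * (block 69). */
    void links_current_from_flux(double **Lx, double **Lx_inv, int variable,
                                 const double *xL, double *ilink, int n)
    {
        if (!variable) {
            for (int i = 0; i < n; ++i) {            /* ilink = Lx_inv * xL   */
                ilink[i] = 0.0;
                for (int j = 0; j < n; ++j)
                    ilink[i] += Lx_inv[i][j] * xL[j];
            }
        } else {
            solve_dense(Lx, xL, ilink, n);           /* solve Lx * ilink = xL */
        }
    }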
  • The analogous calculations for capacitive tree voltage calculator 45 will now be discussed in relation to FIG. 7. At decision block 71, it is determined whether voltages or charges have been used to define the state variables for the system. If voltages have been used, then the state variables from xC are already in the proper form and are provided as output. Otherwise, if charges have been used, it is determined at decision block 75 whether constant or variable capacitance parameters are present. If the parameters are constant, the output is calculated at block 77 as Cy −1xC. Alternatively, if the capacitance parameters are variable, block 79 determines the output vector by solving Cyvtree C=xC.
  • Resistive network algebraic calculation block 47 will now be discussed in more detail in relation to FIG. 8. At decision block 81, it is determined whether the resistive network parameters are constant or variable. If they are constant, the output current vector is calculated as shown in block 83. If it is determined at decision block 81 that the resistive network parameters are variable, the current vector for the algebraic network branches is determined in block 85 by solving the equation shown therein. After the current vector is determined in either block 83 or block 85, the voltage vector is determined at block 87, as shown therein.
  • The inductive network state and output equation determining block 49 is shown in more detail and will now be discussed in relation to FIG. 9. First, at block 90, the inductive network branch currents ibr L and forcing term gL are computed. At decision block 91, it is determined whether the inductive network contains constant or variable parameters. If the parameters are constant, then the type of state variables being used is identified at decision block 92. For currents, the derivatives with respect to time of the inductive portion of the state vector and the currents in the inductive link branches are calculated at block 93. Similarly, for state variables defined as fluxes, those values are calculated at block 94. In either event (from either block 93 or block 94), the time-derivative of inductive branch currents and the voltages for inductive branches are determined at block 95, and the outputs are generated.
  • If it is determined at decision block 91 that variable parameters are present, then it is determined at decision block 96 whether currents or fluxes have been selected as state variables for the inductive portion of the network. If currents have been selected, then the time-derivatives of xL and the inductive link-branch currents are determined at block 97. Correspondingly, if fluxes are used as state variables, then those values are determined at block 98. In either event (from either block 97 or block 98), the time-derivative of inductive branch currents, as well as the inductive branch voltages, are determined at block 99, and the output of overall block 49 is generated.
  • Analogously to the discussion of FIG. 9, the capacitive network state and output equation component 51 will now be discussed in relation to FIG. 10. At block 100, values are calculated for the voltages vbr C and forcing term gC for the capacitive network. At decision block 101, it is determined whether constant or variable parameters are present. If constant parameters are being used, then it is determined at decision block 102 whether voltages or charges are used as state variables for the capacitive network. In the former case, the time-derivative of the state variables relating to the capacitive network and the voltages for the capacitive tree branches are calculated at block 103. Likewise, if charges are being used as state variables, those values are calculated at block 104. In either event (whether from block 103 or block 104), the system calculates the capacitive branch currents and the time-derivative of the capacitive branch voltages at block 105, and the output values are generated.
  • If at decision block 101 it is determined that variable parameters are present, then the type of state variables (voltages or charges) is determined at decision block 106. If voltages are being used, then the time-derivatives of the state variables relating to capacitive branches, as well as the voltages relating to capacitive tree branches, are determined at block 107. If charges are being used for state variables, then the equations of block 108 are solved. In either event (whether from block 107 or block 108), the capacitive branch currents and the time-derivative of the capacitive branch voltages are determined at block 109, and the output of overall block 51 is generated.
  • Built-in Switching Logic
  • Although, as shown in FIG. 4, the ASMG has a single switch branch, different logic may be specified by the user to determine when a given switch is opened or closed. Four built-in switch types, each implementing specific switching logic, have been considered in an exemplary embodiment of the invention. These switch types were selected to represent many common solid-state switching devices such as diodes, thyristors, transistors (MOSFET, BJT, IGBT, for example), triacs, and the like. In general, the built-in switching logic does not permit the opening of switches that would cause discontinuities of currents in inductors and/or current sources, as well as closing of switches that would cause discontinuities of capacitor voltages and/or voltage sources. Such attempts would violate Kirchhoff's current law and/or Kirchhoff's voltage law and, therefore, are not allowed. In some embodiments, violation of KCL, KVL, and/or energy conservation principles results in appropriate error messages. The four types of switches built into the system will now be discussed in relation to FIG. 11, in which the switching control signal is denoted as ubr.
  • Unlatched Bidirectional Switch (UBS)
  • The switch of this type can conduct current in either direction when the switching control signal ubr is greater than zero, and block positive or negative voltages otherwise. That is, ubr>0 implies vbr=0, while ubr≦0 implies ibr=0. The switch can be opened or closed at any instant of time by controlling variable ubr, subject to KCL, KVL, and energy conservation principles.
  • Unlatched Unidirectional Switch (UUS)
  • The switch of this type is similar to the UBS, but can conduct current only in the positive direction. That is, ubr>0 and ibr>0 implies vbr=0, while ubr≦0 implies ibr=0. In any case, ibr≧0. The switch is closed when the variable e1=min(ubr,vbr) crosses zero going negative-to-positive. The switch becomes open when the variable e2=min(ubr,ibr) crosses zero going positive-to-negative. This switch should not be used to short-circuit a loop of capacitors and/or voltage sources, unless the resulting sum of the voltages equals zero.
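  • For illustration, the event variables described for the UUS (and reused by the LUS below) can be evaluated as in the following C sketch; the function names and the two-sample zero-crossing test are assumptions of this example rather than part of the foregoing description.
    static double min2(double a, double b) { return a < b ? a : b; }

    /* e1 = min(ubr, vbr): a negative-to-positive zero crossing closes the switch. */
    int uus_close_event(double u_prev, double v_prev, double u, double v)
    {
        return min2(u_prev, v_prev) < 0.0 && min2(u, v) >= 0.0;
    }

    /* e2 = min(ubr, ibr): a positive-to-negative zero crossing opens the switch. */
    int uus_open_event(double u_prev, double i_prev, double u, double i)
    {
        return min2(u_prev, i_prev) > 0.0 && min2(u, i) <= 0.0;
    }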
  • Latched Bidirectional Switch (LBS)
  • The switch of this type is also similar to the UBS with the exception that it can be opened only at a current zero crossing. That is, the switch is closed when ubr>0, which results in vbr=0. In this case, the event variable e1=ubr is monitored for negative-to-positive zero crossings. In order to open, the switch state controlling variable ubr has to become negative first. After that, the switch automatically opens at the next current zero crossing. This logic can be used to represent an AC arcing switch or an ideal solid-state triac.
  • Latched Unidirectional Switch (LUS)
  • The switch of this type is similar to the LBS with the exception that it can conduct current only in the positive direction. That is, ubr>0 and ibr>0 implies vbr=0. The switch is closed when the variable e1=min(ubr,vbr) crosses zero going negative-to-positive. The switch becomes open when the variable e2=min(ubr,ibr) crosses zero going positive-to-negative. This switch can be used to model an ideal thyristor. These four switch types can be advantageously integrated into the circuit simulation system routines. For example, switching analysis and topology evaluation for state selection can be optimized using the additional information inherent in each switch type, as will occur to those skilled in the art.
  • Many variations on the above-disclosed exemplary embodiments will be apparent to those skilled in the art. For example, the processes discussed above can be implemented on many different hardware platforms running many different operating systems. The computational portions might be implemented as an integrated portion of the overall system, or might rely on a computational platform such as ACSL (published by The Aegis Technologies Group, Inc., of Huntsville, Ala., USA), or MATLAB/Simulink (published by The MathWorks, Inc., of Natick, Mass., USA).
  • Furthermore, in various embodiments, the processes might be carried out on a single processor or be distributed among multiple processors.
  • Still further, the time-domain steps in the simulation might be the same throughout the system, or might vary between variables, for a single variable (over time), between portions of the circuit being simulated, or other divisions as would occur to one skilled in the art. Likewise, the numerical techniques used to perform integration and/or differentiation (e.g., trapezoidal, NDF, or Adams techniques), the rates and maximum error parameters can also vary and might be consistent across the system, vary among portions of the circuit, change over time, or otherwise be applied as would occur to one skilled in the art. Various embodiments of the present invention will also use different techniques to revise the state equations stored for each topology. In some cases, the data structure(s) that describe the state equations before a topology change event are modified only as much as necessary to take into account the new topology (i.e., only the changed portions of the circuit). Alternatively or additionally, new state equations may be derived in whole or in part for one or more topologies. In still other embodiments, a cache is maintained of state equations for some or all of the topologies that are encountered. A wide variety of caching strategies and techniques are available for use with the present invention, as would occur to one skilled in the art.
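  • One possible realization of such a cache is sketched below in C; it is illustrative only, assuming a linked list keyed by the vector of switch states (one byte per switch) and a placeholder builder function, none of which is prescribed by the foregoing description.
    #include <stdlib.h>
    #include <string.h>

    typedef struct StateEquations StateEquations;   /* opaque equation set */
    extern StateEquations *build_state_equations(const unsigned char *sw, int nsw);

    typedef struct CacheEntry {
        unsigned char     *key;     /* switch-state vector identifying a topology */
        StateEquations    *eqs;
        struct CacheEntry *next;
    } CacheEntry;

    static CacheEntry *cache = NULL;

    StateEquations *get_state_equations(const unsigned char *sw, int nsw)
    {
        for (CacheEntry *e = cache; e != NULL; e = e->next)
            if (memcmp(e->key, sw, nsw) == 0)
                return e->eqs;                       /* topology seen before */

        CacheEntry *e = malloc(sizeof *e);           /* build and remember   */
        e->key = malloc(nsw);
        memcpy(e->key, sw, nsw);
        e->eqs = build_state_equations(sw, nsw);
        e->next = cache;
        cache = e;
        return e->eqs;
    }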
  • The parameters of the branches in the system can also be updated during the simulation, using a variety of strategies. In some embodiments, a data structure reflecting the constant parameters is maintained. Each time the variable parameter values are updated in the system, the constants are copied into a new data structure, and the variable parameters are added to them. In other embodiments, the parameters for the circuit at a given time ti are stored in a data structure. When the parameters are being updated for processing time ti+1, the variable parameters from time ti are subtracted from the values in the data structure, then updated for time ti+1, and the new values are added to the values in the data structure.
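  • A minimal sketch of the second strategy, assuming the branch parameters are packed into a single array and the variable ones are tracked by index (all identifiers are invented for this example), is:
    /* Subtract the values used at time t_i and add the values for t_{i+1},
     * so the constant parameters never have to be copied. */
    void update_variable_params(double *param,       /* packed branch parameters */
                                const int *var_idx,  /* indices of variable ones */
                                const double *old_val,
                                const double *new_val,
                                int nvar)
    {
        for (int k = 0; k < nvar; ++k)
            param[var_idx[k]] += new_val[k] - old_val[k];
    }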
  • While a detailed description of one form of nodal analysis is presented herein, many additional and alternative techniques, approaches, and concepts may be applied in the context of the present invention. Likewise, a variety of different circuits can be simulated using the present system, such as all-inductive circuits with no capacitors, circuits with resistors and voltage or current sources, but neither inductors nor capacitors, to name just a few examples. Parameters may be set for a branch or combination of branches to model ideal or real circuit components or sub-circuits, such as diodes, transistors, thyristors, or triacs.
  • Using the techniques discussed above for time-varying parameters, multiple models can be used for a single physical component or sub-circuit. For example, a detailed, computationally intensive model might be used for a component when it has a rapidly varying input. Then, when the input has settled to a slower-varying state, a simpler, less computationally intensive model may be substituted for the complex one.
  • Yet further, the data structures used to represent information in various embodiments of the present invention vary widely as well. The stated structures may be optimized for programming simplicity, code size, storage demands, computational efficiency, cross-platform transferability, or other considerations as would occur to one skilled in the art.
  • Partitioning of branch sets as discussed herein may employ a wide variety of algorithms as would occur to one skilled in the art. Some spanning tree algorithms that are well adapted for use with the present invention are presented in T. H. Cormen, C. E. Leiserson, R. L. Rivest, Introduction to Algorithms, MIT Press, McGraw Hill, 1993; and R. E. Tarjan, Data Structures and Network Algorithms, Bell Laboratories, Murray Hill, 1983. The implementation of partitioning algorithms to the data structures involved will likely be a consideration for each particular implementation of the present invention.
  • All publications, prior applications, and other documents cited herein are hereby incorporated by reference in their entirety as if each had been individually incorporated by reference and fully set forth.
  • While the invention has been illustrated and described in detail in the drawings and foregoing description, the same is to be considered as illustrative and not restrictive in character, it being understood that only the preferred embodiments have been shown and described and that all changes and modifications that would occur to one skilled in the relevant art are desired to be protected. It should also be understood that while the use of the word “preferable,” “preferably,” or “preferred” in the description above indicates that the feature so described may be more desirable in some embodiments, it nonetheless may not be necessary, and embodiments lacking the same may be contemplated as within the scope of the invention, that scope being defined only by the claims that follow. In reading the claims it is intended that when words such as “a,” “an,” “at least one,” “at least a portion,” and the like are used, there is no intention to limit the claim to exactly one such item unless specifically stated to the contrary in the claim.

Claims (16)

1. A method, comprising:
creating one or more data structures sufficient to model an electronic circuit as a collection of n elements consisting of:
zero or more LRV elements, each having at least one of (a) a non-zero inductance parameter Lbr, (b) a non-zero resistance parameter rbr, or (c) a non-zero voltage source parameter ebr, but neither a non-zero capacitance parameter, nor a non-zero current source parameter, nor a switch parameter;
zero or more CRI elements, each having at least one of (a) a non-zero capacitance parameter Cbr, (b) a non-zero resistance parameter rbr, or (c) a non-zero current source parameter jbr, but neither a non-zero inductance parameter, nor a non-zero voltage source parameter, nor a switch parameter; and
zero or more switching elements, each having a switch state and neither a non-zero inductance parameter, a non-zero capacitance parameter, a non-zero resistance parameter, a non-zero voltage source parameter, nor a non-zero current source parameter; and
automatically generating a first set of state equations from said one or more data structures; and
simulating operation of the electronic circuit by application of said first set of state equations;
wherein n is at least two, and the collection comprises either
an LRV element for which at least two of Lbr, rbr, or ebr are non-zero, or
a CRI element for which at least two of Cbr, rbr, or jbr are non-zero.
2. The method of claim 1, wherein said simulating comprises producing state output data, the method further comprising:
modifying the parameters in said first set of state equations as a function of said state output data.
3. The method of claim 1, further comprising:
modifying the parameters in said first set of state equations based on a time-varying parameter of at least one element in said collection.
4. The method of claim 1, further comprising:
generating a second set of state equations from said one or more data structures upon the occurrence of a first topology change event.
5. The method of claim 4, wherein said generating said second set of state equations comprises modifying only the subset of said first set of state equations that depend on the one or more switching elements that have changed.
6. The method of claim 4, wherein each unique vector of switch states represents a topology of the overall circuit, and further comprising:
storing said first set of state equations in a cache;
after a second topology change event, determining whether a set of state equations in the cache represents the new topology;
if said determining is answered in the affirmative, using the set of state equations that represents the new topology to simulate operation of the circuit after the second topology change event; and
if said determining is answered in the negative, building a third set of state equations that represents the new topology, and using the third set of state equations to simulate operation of the circuit after the second topology change event.
7. The method of claim 6, further comprising:
storing said second set of state equations in a cache;
after a third topology change event, deciding whether a set of state equations in the cache represents the new topology;
if said deciding is concluded in the affirmative, using the set of state equations from the cache that represents the new topology to simulate operation of the circuit after the third topology change event; and
if said deciding is concluded in the negative, building a new set of state equations that represents the new topology, and using the new set of state equations to simulate operation of the circuit after the third topology change event.
8. A method, comprising:
creating one or more data structures that together store characteristics of a plurality of active branches Bactive that make up a graph of nodes and branches that form a circuit, wherein Bactive consists of
a set BL of zero or more inductive branches, each having a non-zero inductive component but neither a capacitive component nor a variable switch state;
a set BC of zero or more capacitive branches, each having a non-zero capacitive component but neither an inductive component nor a variable switch state; and
a set BA of additional branches, each having neither an inductive component, nor a capacitive component;
partitioning Bactive into a first branch set Btree active and a second branch set Blink active, where the branches in Btree active form a spanning tree over Bactive, giving priority in said partitioning to branches not in BL over branches in BL;
sub-partitioning Blink active into a third branch set Blink L and a fourth branch set Blink CA, where Blink L=Blink active∩BL;
identifying a fifth branch set BCA as the union of
Blink CA,
BC∩Btree active, and
those branches in Btree active that form a closed graph when combined with Blink CA;
partitioning BCA into a sixth branch set {tilde over (B)}tree CA and a seventh branch set {tilde over (B)}link CA where the branches in {tilde over (B)}tree CA form a spanning tree over BCA, giving priority in said partitioning to branches in BC over branches not in BC;
identifying an eighth branch set Btree C={tilde over (B)}tree CA∩BC;
selecting a set of state variables comprising:
for each branch of Blink L, either the inductor current or inductor flux, and
for each branch of Btree C either the capacitor voltage or capacitor charge; and
simulating a plurality of states of the circuit using the set of state variables.
9. The method of claim 8, wherein said partitioning steps each comprise an application of a weighted spanning tree algorithm.
10. The method of claim 9 wherein, for some positive numbers wL and wC:
for the partitioning of Bactive, a minimum spanning tree algorithm is used with weight function
ω L ( b j ) = { w L if branch b j B L 0 otherwise ; and
for the partitioning of BCA, a maximum spanning tree algorithm is used with weight function
ω C ( b j ) = { w C if branch b j B C 0 otherwise .
11. A system, comprising a processor and a computer-readable medium in communication with said processor, said medium containing programming instructions executable by said processor to:
build state equations for a first topology of an electronic circuit having at least two switching elements, wherein each switching element has a switching state;
solve said state equations at time ti to provide a state output vector, in which at least two elements control the switching states of the switching elements;
calculate the value of a switching variable as a function of the state output vector, wherein the value reflects whether the switching state of at least one of the switching elements is changing; and
if the value of the switching variable at time ti indicates that at least one of the switching elements is changing, determine a second topology of the electronic circuit for time ti + and obtain state equations for the second topology.
12. The system of claim 11, wherein:
said programming instructions comprise a state equation building module, a solver module for ordinary differential equations, and a switching logic module;
said building is performed by the state equation building module;
said solving and calculating are performed by the solver module; and
said determining is performed by the switching logic module.
13. The system of claim 12, wherein said obtaining is performed by said switching logic module.
14. The system of claim 12, wherein said obtaining is performed by said state equation building module.
15. The system of claim 12, wherein:
at a time tj, at least two switching elements are each either rising-sensitive or falling-sensitive switches, wherein
rising-sensitive switches change switching state if and only if a controlling element of the state vector has passed from a negative value to a non-negative value; and
falling-sensitive switches change switching state if and only if a controlling element of the state vector has passed from a positive value to a non-positive value; and
the function is the arithmetic maximum of
a maximum of all elements of the state vector that control rising-sensitive switches, and
the negative of the minimum of all controlling elements of the state vector that control falling-sensitive switches.
16. A system for simulating electronic circuits, comprising a processor and a computer-readable medium in communication with said processor, said medium containing programming instructions executable by said processor to read element parameters and node connection information from a data stream comprising at least one switch type specification, the at least one switch type specification being selected from the group consisting of:
a unidirectional, unlatched switch;
a bidirectional, unlatched switch;
a unidirectional, latched switch; and
a bidirectional, latched switch; and
wherein said instructions are further executable by said processor automatically to calculate state equations for the circuit given the states of switches specified by said at least one switch type specification.
US12/060,556 2001-01-11 2008-04-01 Circuit simulation Abandoned US20090012770A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US12/060,556 US20090012770A1 (en) 2001-01-11 2008-04-01 Circuit simulation

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US26103301P 2001-01-11 2001-01-11
US10/043,981 US7353157B2 (en) 2001-01-11 2002-01-11 Circuit simulation
US12/060,556 US20090012770A1 (en) 2001-01-11 2008-04-01 Circuit simulation

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US10/043,981 Continuation US7353157B2 (en) 2001-01-11 2002-01-11 Circuit simulation

Publications (1)

Publication Number Publication Date
US20090012770A1 true US20090012770A1 (en) 2009-01-08

Family

ID=22991685

Family Applications (2)

Application Number Title Priority Date Filing Date
US10/043,981 Expired - Lifetime US7353157B2 (en) 2001-01-11 2002-01-11 Circuit simulation
US12/060,556 Abandoned US20090012770A1 (en) 2001-01-11 2008-04-01 Circuit simulation

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US10/043,981 Expired - Lifetime US7353157B2 (en) 2001-01-11 2002-01-11 Circuit simulation

Country Status (3)

Country Link
US (2) US7353157B2 (en)
AU (1) AU2002243494A1 (en)
WO (1) WO2002056145A2 (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080052651A1 (en) * 2006-08-08 2008-02-28 Sheng-Guo Wang Methods to generate state space models by closed forms for general interconnect and transmission lines, trees and nets, and their model reduction and simulations
US20100199245A1 (en) * 2004-10-29 2010-08-05 Synopsys, Inc. Non-Linear Receiver Model For Gate-Level Delay Calculation
US20140049235A1 (en) * 2012-08-14 2014-02-20 Chengdu Monolithic Power Systems Co., Ltd. Switching regulator and the method thereof
CN106326509A (en) * 2015-06-29 2017-01-11 田宇 Circuit simulation method and device
US10445448B2 (en) * 2017-12-14 2019-10-15 Yu Tian Method and system for circuit simulation

Families Citing this family (35)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6983432B2 (en) * 2001-05-04 2006-01-03 International Business Machines Corporation Circuit and method for modeling I/O
US20030154059A1 (en) * 2001-11-30 2003-08-14 Jorg-Uwe Feldmann Simulation apparatus and simulation method for a system having analog and digital elements
US7191113B2 (en) * 2002-12-17 2007-03-13 International Business Machines Corporation Method and system for short-circuit current modeling in CMOS integrated circuits
US6944834B2 (en) * 2003-01-22 2005-09-13 Stmicroelectrontronics, Inc. Method and apparatus for modeling dynamic systems
US7296054B2 (en) * 2003-01-24 2007-11-13 The Mathworks, Inc. Model simulation and calibration
US8805664B1 (en) * 2003-08-11 2014-08-12 The Mathworks, Inc. System and method for simulating branching behavior
DE102004014178B4 (en) * 2004-03-23 2006-04-27 Infineon Technologies Ag Computer-aided, automated method for verification of analog circuits
US7555733B1 (en) * 2005-09-18 2009-06-30 Infinisim, Inc. Hierarchical partitioning
US7818158B2 (en) * 2005-09-21 2010-10-19 Synopsys, Inc. Method for symbolic simulation of circuits having non-digital node voltages
US7774725B1 (en) * 2005-11-04 2010-08-10 Purdue Research Foundation Computationally efficient modeling and simulation of large scale systems
US20070136044A1 (en) * 2005-12-13 2007-06-14 Beattie Michael W Efficient simulation of dominantly linear circuits
US8112264B1 (en) 2006-05-31 2012-02-07 Worldwide Pro Ltd. Simulating circuits using network tearing
US8694302B1 (en) 2006-05-31 2014-04-08 Worldwide Pro Ltd. Solving a hierarchical circuit network using a Barycenter compact model
US8738335B1 (en) 2006-05-31 2014-05-27 Worldwide Pro Ltd. Solving a circuit network in hierarchical, multicore, and distributed computing environment
US7827016B1 (en) 2006-05-31 2010-11-02 William Wai Yan Ho Simulating circuits by distributed computing
US8019575B1 (en) * 2006-06-30 2011-09-13 The Mathworks, Inc. State projection via minimization of error energy
EP2257874A4 (en) * 2008-03-27 2013-07-17 Rocketick Technologies Ltd Design simulation using parallel processors
US8245165B1 (en) * 2008-04-11 2012-08-14 Cadence Design Systems, Inc. Methods and apparatus for waveform based variational static timing analysis
US9032377B2 (en) 2008-07-10 2015-05-12 Rocketick Technologies Ltd. Efficient parallel computation of dependency problems
KR101607495B1 (en) * 2008-07-10 2016-03-30 로케틱 테크놀로지즈 리미티드 Efficient parallel computation of dependency problems
US8543368B1 (en) * 2008-08-05 2013-09-24 Marvell Israel (M.I.S.L.) Ltd. Method and system for testing
US8195439B1 (en) * 2008-09-02 2012-06-05 Infinisim, Inc. Real-time adaptive circuit simulation
US8667455B1 (en) 2010-06-11 2014-03-04 Worldwide Pro Ltd. Hierarchical visualization-based analysis of integrated circuits
US9128748B2 (en) 2011-04-12 2015-09-08 Rocketick Technologies Ltd. Parallel simulation using multiple co-simulators
WO2012169000A1 (en) * 2011-06-06 2012-12-13 富士通株式会社 Analog circuit simulator and analog circuit verification method
US8515715B2 (en) * 2011-06-17 2013-08-20 International Business Machines Corporation Method, system and program storage device for simulating electronic device performance as a function of process variations
US20140288911A1 (en) * 2013-03-25 2014-09-25 Nvidia Corporation System and method for simulating integrated circuit performance on a many-core processor
US10332024B2 (en) 2017-02-22 2019-06-25 Rigetti & Co, Inc. Modeling superconducting quantum circuit systems
JP7043178B2 (en) * 2017-03-23 2022-03-29 太陽誘電株式会社 Simulation method of equivalent circuit of passive element and its device
CN106991221B (en) * 2017-03-24 2020-04-24 清华大学 Segmented broken line modeling method based on transient physical process of IGBT device
US10380314B1 (en) * 2017-05-10 2019-08-13 Cadence Design Systems, Inc. System and method for estimating current in an electronic circuit design
DE102017113594A1 (en) * 2017-06-20 2018-12-20 Dspace Digital Signal Processing And Control Engineering Gmbh Computer-implemented method for simulating an overall electrical circuit
EP3881216A1 (en) * 2018-11-15 2021-09-22 dspace digital signal processing and control engineering GmbH Computer-implemented method for simulating an electrical circuit
CN110414118B (en) * 2019-07-23 2023-05-05 上海电机学院 Boost converter modeling method based on separation modeling and application
DE102020112035B3 (en) 2020-05-05 2021-07-22 Bayerische Motoren Werke Aktiengesellschaft Method for detecting an electric arc in an on-board network by means of a visibility graph, control device and on-board network

Family Cites Families (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4827427A (en) 1987-03-05 1989-05-02 Hyduke Stanley M Instantaneous incremental compiler for producing logic circuit designs
US5461574A (en) 1989-03-09 1995-10-24 Fujitsu Limited Method of expressing a logic circuit
US5193068A (en) 1990-10-01 1993-03-09 Northern Telecom Limited Method of inducing off-circuit behavior in a physical model
US5467291A (en) 1991-09-09 1995-11-14 Hewlett-Packard Company Measurement-based system for modeling and simulation of active semiconductor devices over an extended operating frequency range
US5694579A (en) 1993-02-18 1997-12-02 Digital Equipment Corporation Using pre-analysis and a 2-state optimistic model to reduce computation in transistor circuit simulation
US5550760A (en) * 1993-02-18 1996-08-27 Digital Equipment Corporation Simulation of circuits
US5469366A (en) * 1993-09-20 1995-11-21 Lsi Logic Corporation Method and apparatus for determining the performance of nets of an integrated circuit design on a semiconductor design automation system
US5812431A (en) 1994-06-13 1998-09-22 Cadence Design Systems, Inc. Method and apparatus for a simplified system simulation description
US5732192A (en) 1994-11-30 1998-03-24 The United States Of America As Represented By The Administrator Of The National Aeronautics And Space Administration Global qualitative flow-path modeling for local state determination in simulation and analysis
US5740347A (en) 1995-05-01 1998-04-14 Synopsys, Inc. Circuit analyzer of black, gray and transparent elements
US5870585A (en) 1995-10-10 1999-02-09 Advanced Micro Devices, Inc. Design for a simulation module using an object-oriented programming language
US5838947A (en) 1996-04-02 1998-11-17 Synopsys, Inc. Modeling, characterization and simulation of integrated circuit power behavior
US5920484A (en) 1996-12-02 1999-07-06 Motorola Inc. Method for generating a reduced order model of an electronic circuit
US6181754B1 (en) 1998-06-12 2001-01-30 Cadence Design Systems, Inc. System and method for modeling mixed signal RF circuits in a digital signal environment
US6295635B1 (en) 1998-11-17 2001-09-25 Agilent Technologies, Inc. Adaptive Multidimensional model for general electrical interconnection structures by optimizing orthogonal expansion parameters

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100199245A1 (en) * 2004-10-29 2010-08-05 Synopsys, Inc. Non-Linear Receiver Model For Gate-Level Delay Calculation
US8205177B2 (en) * 2004-10-29 2012-06-19 Synopsys, Inc. Non-linear receiver model for gate-level delay calculation
US20080052651A1 (en) * 2006-08-08 2008-02-28 Sheng-Guo Wang Methods to generate state space models by closed forms for general interconnect and transmission lines, trees and nets, and their model reduction and simulations
US7805686B2 (en) * 2006-08-08 2010-09-28 Sheng-Guo Wang Methods to generate state space models by closed forms for general interconnect and transmission lines, trees and nets, and their model reduction and simulations
US20140049235A1 (en) * 2012-08-14 2014-02-20 Chengdu Monolithic Power Systems Co., Ltd. Switching regulator and the method thereof
US8941367B2 (en) * 2012-08-14 2015-01-27 Chengdu Monolithic Power Systems Co., Ltd. Switching regulator and the method of generating a peak current signal for the switching regulator
CN106326509A (en) * 2015-06-29 2017-01-11 田宇 Circuit simulation method and device
US10445448B2 (en) * 2017-12-14 2019-10-15 Yu Tian Method and system for circuit simulation

Also Published As

Publication number Publication date
WO2002056145A2 (en) 2002-07-18
AU2002243494A1 (en) 2002-07-24
US7353157B2 (en) 2008-04-01
WO2002056145A3 (en) 2002-10-10
US20020183990A1 (en) 2002-12-05

Similar Documents

Publication Publication Date Title
US7353157B2 (en) Circuit simulation
US5379231A (en) Method and apparatus for simulating a microelectric interconnect circuit
Goodrich et al. Sorting, searching, and simulation in the mapreduce framework
Palenius et al. Comparison of reduced-order interconnect macromodels for time-domain simulation
Frasca et al. Linear passive networks with ideal switches: Consistent initial conditions and state discontinuities
Ferranti et al. Physics-based passivity-preserving parameterized model order reduction for PEEC circuit analysis
Shi Graph-pair decision diagram construction for topological symbolic circuit analysis
US20080052651A1 (en) Methods to generate state space models by closed forms for general interconnect and transmission lines, trees and nets, and their model reduction and simulations
Hendrickson et al. Skewed graph partitioning
Thornton Modeling digital switching circuits with linear algebra
WO2006132639A1 (en) Circuit splitting in analysis of circuits at transistor level
Nie et al. Real-time transient simulation based on a robust two-layer network equivalent
Blankenstein Geometric modeling of nonlinear RLC circuits
Yu et al. Efficient approximation of symbolic network functions using matroid intersection algorithms
Scholz The signature method for DAEs arising in the modeling of electrical circuits
Bizzarri et al. Shooting by a two-step Galerkin method
Thornton Simulation and implication using a transfer function model for switching logic
De Camillis et al. Parameterized partial element equivalent circuit method for sensitivity analysis of multiport systems
Cong et al. An optimal performance-driven technology mapping algorithm for LUT-based FPGAs under arbitrary net-delay models
Wang et al. Graph-theory-based simplex algorithm for VLSI layout spacing problems with multiple variable constraints
Batterywala et al. Efficient DC analysis of RVJ circuits for moment and derivative computations of interconnect networks
Nguyen et al. Adjoint transient sensitivity computation in piecewise linear simulation
Jatskevich et al. Automated state-variable formulation for power electronic circuits and systems
Zhu et al. An unconditional stable general operator splitting method for transistor level transient analysis
Wagner et al. An advanced equation assembly module

Legal Events

Date Code Title Description
AS Assignment

Owner name: P.C. KRAUSE & ASSOCIATES, INC., INDIANA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:WASYNCZUK, OLEG;JATSKEVICH, JURI;REEL/FRAME:021343/0838

Effective date: 20080130

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION