Signal-flow graph

"Mason graph" redirects here. For other flow graphs, see Flow graph (mathematics).

A signal-flow graph or signal-flowgraph (SFG), invented by Shannon,[1] but often called a Mason graph after Samuel Jefferson Mason who coined the term,[2] is a specialized flow graph, a directed graph in which nodes represent system variables, and branches (edges or arcs) represent functional connections between pairs of nodes. Thus, signal-flow graph theory builds on that of directed graphs (also called digraphs), which includes as well that of oriented graphs. This mathematical theory of digraphs exists, of course, quite apart from its applications.[3][4]

SFGs are most commonly used to represent signal flow in a physical system and its controller(s), forming a cyber-physical system. Among their other uses are the representation of signal flow in various electronic networks and amplifiers, digital filters, state variable filters and some other types of analog filters. In nearly all literature, a signal-flow graph is associated with a set of linear equations.


Wai-Kai Chen wrote: "The concept of a signal-flow graph was originally worked out by Shannon [1942] [1] in dealing with analog computers. The greatest credit for the formulation of signal-flow graphs is normally extended to Mason [1953],[2] [1956].[5] He showed how to use the signal-flow graph technique to solve some difficult electronic problems in a relatively simple manner. The term signal flow graph was used because of its original application to electronic problems and the association with electronic signals and flowcharts of the systems under study."[6]

Lorens wrote: "Previous to Mason's work, C. E. Shannon[7] worked out a number of the properties of what are now known as flow graphs. Unfortunately, the paper originally had a restricted classification and very few people had access to the material."[8]

"The rules for the evaluation of the graph determinant of a Mason Graph were first given and proven by Shannon [1942] using mathematical induction. His work remained essentially unknown even after Mason published his classical work in 1953. Three years later, Mason [1956] rediscovered the rules and proved them by considering the value of a determinant and how it changes as variables are added to the graph. [...]"[9]

Shannon-Happ formula

McNamee and Okrut, who based their electrical circuit software on the Shannon-Happ formula, wrote about Shannon's and Happ's contributions.[10] For a set of linear relations that satisfies a consistency criterion, there is a formula by which the solution can be obtained without requiring a stability criterion for each relation. Claude Shannon originally developed the formula while investigating the functional operation of an analog computer. Shannon's formula is an analytic expression for calculating the gain of an interconnected set of amplifiers in an analog computing network. Happ generalized it for topologically closed systems.

Domain of application

Robichaud et al. identify the domain of application of SFGs as follows:[11]

"All the physical systems analogous to these networks [constructed of ideal transformers, active elements and gyrators] constitute the domain of application of the techniques developed [here]. Trent[12] has shown that all the physical systems which satisfy the following conditions fall into this category.
  1. The finite lumped system is composed of a number of simple parts, each of which has known dynamical properties which can be defined by equations using two types of scalar variables and parameters of the system. Variables of the first type represent quantities which can be measured, at least conceptually, by attaching an indicating instrument to two connection points of the element. Variables of the second type characterize quantities which can be measured by connecting a meter in series with the element. Relative velocities and positions, pressure differentials and voltages are typical quantities of the first class, whereas electric currents, forces, rates of heat flow, are variables of the second type. Firestone has been the first to distinguish these two types of variables with the names across variables and through variables.
  2. Variables of the first type must obey a mesh law, analogous to Kirchhoff's voltage law, whereas variables of the second type must satisfy an incidence law analogous to Kirchhoff's current law.
  3. Physical dimensions of appropriate products of the variables of the two types must be consistent. For the systems in which these conditions are satisfied, it is possible to draw a linear graph isomorphic with the dynamical properties of the system as described by the chosen variables. The techniques [...] can be applied directly to these linear graphs as well as to electrical networks, to obtain a signal flow graph of the system."

Basic flow graph concepts

The following illustration and its meaning were introduced by Mason to illustrate basic concepts:[2]

(a) Simple flow graph, (b) the arrows of (a) incident on node 2, (c) the arrows of (a) incident on node 3

In the simple flow graphs of the figure, a functional dependence of a node is indicated by an incoming arrow, and the node originating this influence is at the tail of that arrow. In its most general form, the signal-flow graph indicates by incoming arrows only those nodes that influence the processing at the receiving node, and at each node, i, the incoming variables are processed according to a function associated with that node, say Fi. The flowgraph in (a) represents a set of explicit relationships:

\begin{align}
 x_1 &= \text{an independent variable} \\
 x_2 &= F_2(x_1, x_3) \\
 x_3 &= F_3(x_1, x_2, x_3)
\end{align}

Node x1 is an isolated node because no arrow is incoming; the equations for x2 and x3 have the graphs shown in parts (b) and (c) of the figure.

These relationships define for every node a function that processes the input signals it receives. Each non-source node combines the input signals in some manner, and broadcasts a resulting signal along each outgoing branch. "A flow graph, as defined originally by Mason, implies a set of functional relations, linear or not." [11]

However, the commonly used Mason graph is more restricted, assuming that each node simply sums its incoming arrows, and that each branch involves only the pair of nodes it joins. Thus, in this more restrictive approach, the node x1 is unaffected while:

\begin{align}
 x_2 &= f_{21}(x_1, x_2) + f_{23}(x_2, x_3) \\
 x_3 &= f_{31}(x_1, x_3) + f_{32}(x_2, x_3) + f_{33}(x_3) \ ,
\end{align}

and now the functions fij can be associated with the signal-flow branches ij joining the arguments xi, xj of these functions, rather than having general relationships associated with each node. Frequently these functions are simply multiplicative factors, for example, cij, where c is a scalar, but possibly a function of some parameter like the Laplace transform variable s.
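As a concrete sketch of this multiplicative-branch case, the following snippet solves the two node equations directly for illustrative gains cij (the numeric values are chosen here for demonstration, not taken from the article):

```python
# Node equations with multiplicative branch gains c_ij
# (the numeric values below are illustrative):
#   x2 = c21*x1 + c23*x3
#   x3 = c31*x1 + c32*x2 + c33*x3   (c33 is a self-loop gain)
c21, c23, c31, c32, c33 = 2.0, 0.5, 1.0, 3.0, 0.25

x1 = 1.0  # independent (source) variable

# Eliminate x3 from its own equation: x3*(1 - c33) = c31*x1 + c32*x2,
# then substitute into the x2 equation and solve for x2.
x2 = (c21 + c23 * c31 / (1 - c33)) * x1 / (1 - c23 * c32 / (1 - c33))
x3 = (c31 * x1 + c32 * x2) / (1 - c33)

# Check that both node equations hold.
assert abs(x2 - (c21 * x1 + c23 * x3)) < 1e-12
assert abs(x3 - (c31 * x1 + c32 * x2 + c33 * x3)) < 1e-12
```

The self-loop gain c33 shows up as the divisor (1 − c33), the same adjustment that the graph-reduction rules below apply when eliminating a looping edge.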

Here are some terms used in SFG theory:[13]

  • Node. A node is a vertex representing a variable or signal.
    • Input node or source. Node with only outgoing branches (independent variable).
    • Output node or sink. Node with only incoming branches (dependent variable).
    • Mixed node. Node with both incoming and outgoing branches.
  • Path. A path is a continuous set of branches traversed in the direction indicated by the branch arrows.
    • Open path. If no node is re-visited, the path is open.
    • Forward path. A path from an input node (source) to an output node (sink) that does not re-visit any node.
    • Loop. A closed path (it originates and terminates on the same node, and no node is touched more than once).
    • Non-touching loops. Non-touching loops have no common nodes.
  • Graph reduction.
    • Residual node. In any contemplated process of graph reduction, the nodes to be retained in the new graph are called residual nodes.[2]

Linear signal-flow graphs

Linear signal-flow graph methods only apply to linear time-invariant systems, as studied by their associated theory. When modeling a system of interest, the first step is often to determine the equations representing the system's operation without assigning causes and effects (this is called acausal modeling). [14] A SFG is then derived from this system of equations. Robichaud et al. wrote: "The signal flow graph contains the same information as the equations from which it is derived; but there does not exist a one-to-one correspondence between the graph and the system of equations. One system will give different graphs according to the order in which the equations are used to define the variable written on the left-hand side."[11]

A linear SFG consists of nodes indicated by dots and weighted directional branches indicated by arrows. The nodes are the variables of the equations and the branch weights are the coefficients. Signals may only traverse a branch in the direction indicated by its arrow. The elements of a SFG can only represent the operations of multiplication by a coefficient and addition, which are sufficient to represent the constrained equations. When a signal traverses a branch in its indicated direction, the signal is multiplied by the weight of the branch. When two or more branches direct into the same node, their outputs are added.

For systems described by linear algebraic or differential equations, the signal-flow graph is mathematically equivalent to the system of equations describing the system, and the equations governing the nodes are discovered for each node by summing incoming branches to that node. These incoming branches convey the contributions of the other nodes, expressed as the connected node value multiplied by the weight of the connecting branch, usually a real number or function of some parameter (for example a Laplace transform variable s).

For linear active networks, Choma writes:[15] "By a ‘signal flow representation’ [or ‘graph’, as it is commonly referred to] we mean a diagram that, by displaying the algebraic relationships among relevant branch variables of network, paints an unambiguous picture of the way an applied input signal ‘flows’ from input-to-output [...] ports."

A motivation for a SFG analysis is described by Chen:[16]

"The analysis of a linear system reduces ultimately to the solution of a system of linear algebraic equations. As an alternative to conventional algebraic methods of solving the system, it is possible to obtain a solution by considering the properties of certain directed graphs associated with the system." [See solving a system of equations by graph transformation rules.] "The unknowns of the equations correspond to the nodes of the graph, while the linear relations between them appear in the form of directed edges connecting the nodes. ...The associated directed graphs in many cases can be set up directly by inspection of the physical system without the necessity of first formulating the associated equations..."

Elements of a linear signal-flow graph

Elements and constructs of a signal flow graph. a) a node labeled x, b) a branch with a multiplicative gain of m, c) a branch with a multiplicative gain of one, d) an input node labeled Vin with an optional multiplicative gain of m, e) an output node labeled Iout with an optional multiplicative gain of m. f) addition, g) a loop with a loop gain of Am, h) a complex expression Z = c( aX + bY ).

The topology of a signal flow graph has a one-to-one relationship with a system of linear equations[citation needed] of the following form:

 x_j = \sum_{k=1}^{N} t_{jk} x_k
where tjk = transmission (or gain) from xk to xj.

For some applications, the directed nature of the branches can be interpreted usefully in terms of cause and effect:[17]

 ( \mathrm{effect \ at \ j} ) = \sum_{k=1}^{N} ( \mathrm{transmission \ from \ k \ to \ j} )( \mathrm{cause \ at \ k} )

The figure to the right depicts various elements and constructs of a signal flow graph (SFG).

  • Item a) is a node. In this case, the node is labeled “x”.
  • Item b) is a branch with a multiplicative gain of m. The meaning is that the output, at the tip of the arrow, is m times the input at the tail of the arrow. The gain can be a simple constant or a function (for example: a function of some transform variable such as s, ω, or z, for Laplace, Fourier or Z-transform relationships).
  • Item c) is a branch with a multiplicative gain of one. When the gain is omitted, it is assumed to be unity.
  • Item d) is an input node. In this case it is labeled “Vin”. Input nodes are characterized by having one or more attached arrows pointing away from the node and no arrows pointing into the node. In this case, the input is multiplied by the optional gain m. Any complete SFG will have at least one input node.
  • Item e) is an explicit output node labeled "Iout". Although any node can be an output, explicit output nodes are often used to provide clarity. Explicit output nodes are characterized by having one or more attached arrows pointing into the node and no arrows pointing away from the node. Explicit output nodes are not required.
  • Item f) depicts addition. When two or more arrows point into a node, their outputs are added.
  • Item g) depicts a simple loop. The loop gain is (A)(m).
  • Item h) depicts the complex expression Z = c( aX + bY ).

Terms used in linear SFG theory also include:[13]

  • Path gain: the product of the gains of all the branches in the path.
  • Loop gain: the product of the gains of all the branches in the loop.

Modeling: Choosing the variables

In general, there are several ways of choosing the variables in a complex system. Corresponding to each choice, a system of equations can be written, and each system of equations can be represented in a graph. This formulation of the equations becomes direct and automatic if one has at one's disposal techniques which permit the drawing of a graph directly from the schematic diagram of the system under study. The structure of the graphs thus obtained is related in a simple manner to the topology of the schematic diagram, and it becomes unnecessary to consider the equations, even implicitly, to obtain the graph. In some cases, one has simply to imagine the flow graph in the schematic diagram and the desired answers can be obtained without even drawing the flow graph.


Transfer Function

Signal-flow graphs are very often used with Laplace-transformed signals. When signals are Laplace-transformed, the gain is called a transfer function.

Solving linear equations with signal-flow graphs

Signal flow graphs can be used to solve sets of simultaneous linear equations.[19]

NOTE: Although the directional feature of a signal-flow graph is often taken as depicting cause and effect, the underlying model can represent equations, e.g. an acausal model. The directionality of the signal-flow graph represents the relationships between variables (the inflows to a variable represent its associated equation). The set of equations must be consistent and all equations must be linearly independent.

Putting the equations in "standard form"

Flow graph for three simultaneous equations. The edges incident on each node are colored differently just for emphasis. Rotating the figure by 120° simply permutes the indices.

For M equations with N unknowns, where each yj is a known value and each xj is an unknown value, there is an equation for each known, of the following form:

\sum_{k=1}^{N} c_{jk} x_k = y_j  ; the usual form for simultaneous linear equations, with 1 ≤ j ≤ M

Although it is feasible, particularly for simple cases, to establish a signal flow graph using the equations in this form, some rearrangement allows a general procedure that works easily for any set of equations, as now is presented. To proceed, first the equations are rewritten as

\sum_{k=1}^{N} c_{jk} x_k - y_j = 0

and further rewritten as

\sum_{k=1}^{N} c_{jk} x_k + x_j - y_j = x_j

and finally rewritten as

\sum_{k=1}^{N} ( c_{jk} + \delta_{jk} ) x_k - y_j = x_j  ; form suitable to be expressed as a signal flow graph,
where δjk is the Kronecker delta.

The final form for the equations is as specified in the section Elements of a linear signal-flow graph. The signal-flow graph is now arranged by selecting one of these equations and addressing the node on its right-hand side. This node connects to itself with a branch whose weight includes a '+1', making a self-loop in the flow graph. The other terms in that equation connect this node first to the source in this equation and then to all the other branches incident on this node. Every equation is treated this way, and then each incident branch is joined to its respective emanating node. As there is a basic symmetry in the treatment of every node, a simple starting point is an arrangement of nodes with each node at one vertex of a regular polygon. When expressed using the general coefficients {cin}, the environment of each node is then just like all the rest apart from a permutation of indices. Such an implementation for a set of three simultaneous equations is seen in the figure.[20]
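The equivalence between the standard form and the final signal-flow form can be checked numerically. The coefficients and solution below are illustrative, chosen only to exercise the identity:

```python
# Check that the final "signal-flow" form of the equations,
#   x_j = sum_k (c_jk + delta_jk) * x_k - y_j,
# is equivalent to the standard form  sum_k c_jk x_k = y_j.
# (coefficients and solution below are illustrative)
C = [[2.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]
x = [1.0, -2.0, 3.0]                                           # a chosen solution
y = [sum(C[j][k] * x[k] for k in range(3)) for j in range(3)]  # y = C x

delta = lambda j, k: 1.0 if j == k else 0.0
for j in range(3):
    rhs = sum((C[j][k] + delta(j, k)) * x[k] for k in range(3)) - y[j]
    assert abs(rhs - x[j]) < 1e-12   # node equation of the flow graph holds
```

The check is just the algebra of the rewriting: adding xj to both sides of the standard-form equation leaves the solution set unchanged, while producing a node equation with xj alone on one side.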

Often the known values, yj, are taken as the primary causes and the unknown values, xj, as the effects, but regardless of this interpretation, the last form for the set of equations can be represented as a signal-flow graph.

Applying Mason's gain formula to solve a system of linear equations

For more details on this topic, see Mason's gain formula.

In the most general case, the values for all the xk variables can be calculated by computing Mason's gain formula for the path from each yj to each xk and using superposition.

 x_k = \sum_{j=1}^{M} G_{kj} y_j
where Gkj = the sum of Mason's gain formula computed for all the paths from input yj to variable xk.

In general, there are N−1 paths from yj to variable xk, so the computational effort to calculate Gkj is proportional to N−1. Since there are M values of yj, Gkj must be computed M times for a single value of xk. The computational effort to calculate a single xk variable is therefore proportional to (N−1)(M), and the effort to compute all the xk variables is proportional to (N)(N−1)(M). If there are N equations and N unknowns, then the computational effort is on the order of N³.

Systematic reduction of a linear flow graph to solve its gain (sources to sinks)

A signal-flow graph may be simplified by graph transformation rules.[21] A reduced signal flow graph relates the dependent variables of interest (residual nodes, sinks) to the independent variables (sources) by a gain, which may be a transfer function.

Parallel edges. Replace parallel edges with a single edge having a gain equal to the sum of original gains.
Signal flow graph refactoring rule: replacing parallel edges with a single edge with a gain set to the sum of original gains.
The graph on the left has parallel edges between nodes. On the right, these parallel edges have been replaced with a single edge having a gain equal to the sum of the gains on each original edge.
The equations corresponding to the reduction between N and node I_1 are:

\begin{align}
 N &= I_1 f_1 + I_1 f_2 + I_1 f_3 + \cdots \\
 N &= I_1 (f_1 + f_2 + f_3 + \cdots)
\end{align}
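A minimal sketch of the parallel-edge rule, using a simple (source, target, gain) edge-list representation invented here for illustration:

```python
# Parallel edges between the same pair of nodes merge into a single
# edge whose gain is the sum of the original gains.
def merge_parallel(edges):
    merged = {}
    for src, dst, gain in edges:
        merged[(src, dst)] = merged.get((src, dst), 0.0) + gain
    return [(s, d, g) for (s, d), g in merged.items()]

# Three parallel edges from I1 to N, plus one unrelated edge:
edges = [("I1", "N", 1.5), ("I1", "N", 2.0), ("I1", "N", 0.5), ("I2", "N", 4.0)]
merged = sorted(merge_parallel(edges))

# The rule is idempotent: merging an already-merged graph changes nothing.
assert merged == sorted(merge_parallel(merged))
```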

Self-looping edge. Replace looping edges by adjusting the gains on the incoming edges.
Signal flow graph refactoring rule: a looping edge at node N is eliminated and inflow gains are multiplied by an adjustment factor.
The graph on the left has a looping edge at node N, with a gain of g. On the right, the looping edge has been eliminated, and all inflowing edges have their gain divided by (1-g).
The equations corresponding to the reduction between N and all its input signals are:

\begin{align}
 N &= I_1 f_1 + I_2 f_2 + I_3 f_3 + N g \\
 N - N g &= I_1 f_1 + I_2 f_2 + I_3 f_3 \\
 N (1-g) &= I_1 f_1 + I_2 f_2 + I_3 f_3 \\
 N &= (I_1 f_1 + I_2 f_2 + I_3 f_3) \div (1-g) \\
 N &= I_1 f_1 \div (1-g) + I_2 f_2 \div (1-g) + I_3 f_3 \div (1-g)
\end{align}
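The self-loop rule can be sketched the same way; the dictionary representation and numeric gains here are illustrative:

```python
# Eliminate a self-loop of gain g at node N by dividing every
# inflowing gain by (1 - g), per the derivation above.
def remove_self_loop(inflows, g):
    assert g != 1, "g = 1 makes the node equation degenerate"
    return {src: gain / (1 - g) for src, gain in inflows.items()}

inflows = {"I1": 2.0, "I2": 3.0, "I3": 4.0}   # gains f1, f2, f3
g = 0.5                                        # self-loop gain
adjusted = remove_self_loop(inflows, g)

# The node value is the same either way, for arbitrary input signals:
I = {"I1": 1.0, "I2": 1.0, "I3": 1.0}
N_loop = sum(inflows[s] * I[s] for s in I) / (1 - g)   # solved with the loop
N_flat = sum(adjusted[s] * I[s] for s in I)            # loop eliminated
assert abs(N_loop - N_flat) < 1e-12
```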

Outflowing edges. Replace outflowing edges with edges directly flowing from the node's sources.
Signal flow graph refactoring rule: replacing outflowing edges with direct flows from inflowing sources.
The graph on the left has an intermediate node N between nodes from which it has inflows, and nodes to which it flows out. The graph on the right shows direct flows between these node sets, without transiting via N. For the sake of simplicity, N and its inflows are not represented. The outflows from N are eliminated.

Zero-signal nodes. Eliminate outflowing edges from a node determined to have a value of zero.
Signal flow graph refactoring rule: eliminating outflowing edges from a node known to have a value of zero.
If the value of a node is zero, its outflowing edges can be eliminated.

Nodes without outflows. Eliminate a node without outflows.
Signal flow graph refactoring rule: a node that is not of interest can be eliminated provided that it has no outgoing edges.
In this case, N is not a variable of interest, and it has no outgoing edges; therefore, N, and its inflowing edges, can be eliminated.
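The "outflowing edges" rule above can also be sketched in code. The data structures are illustrative; the check confirms that routing signals through the eliminated node N gives the same result as the new direct edges:

```python
# An intermediate node N with inflow gains a_i (from nodes S_i) and
# outflow gains b_j (to nodes T_j) is removed by adding direct edges
# S_i -> T_j with gain a_i * b_j.
def eliminate_node(inflows, outflows):
    return {(s, t): a * b for s, a in inflows.items()
                          for t, b in outflows.items()}

inflows = {"S1": 2.0, "S2": 3.0}
outflows = {"T1": 5.0, "T2": 7.0}
direct = eliminate_node(inflows, outflows)

# Compare against transiting through N explicitly, for sample signals:
S = {"S1": 1.0, "S2": -1.0}
N = sum(a * S[s] for s, a in inflows.items())
for t, b in outflows.items():
    via_N = N * b
    direct_sum = sum(direct[(s, t)] * S[s] for s in S)
    assert abs(via_N - direct_sum) < 1e-12
```

Repeatedly applying these three rules (merge parallel edges, remove self-loops, eliminate intermediate nodes) reduces a graph to direct source-to-sink gains.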


The above procedure for building the SFG from an acausal system of equations and for solving the SFG's gains has been implemented[22] as an add-on to MATHLAB 68,[23] an on-line system providing machine aid for the mechanical symbolic processes encountered in analysis.

Block diagram and signal-flow representations

Example: Block diagram and equivalent signal-flow graph representations. The input R(s) is the Laplace-transformed input signal; it is shown as a source node in the signal-flow graph (a source node has no input edges). The output signal C(s) is the Laplace-transformed output variable. It is represented as a sink node in the flow diagram (a sink has no output edges). G(s) and H(s) are transfer functions. The two flow graph representations are equivalent.

Choosing a block diagram or a signal-flow representation for the gain (or transfer function) is just a matter of convenience and preference.

Compared with a block diagram, a linear signal-flow graph is more constrained[24] in that the SFG rigorously describes linear algebraic equations represented by a directed graph.

Causality and signal-flow graphs

Causality of a signal-flow graph is an issue for simulation on an analog or digital computer, rather than an issue about the physical causality of a real-world system being modeled.


Consider a resistor having a value of R ohms. Denote by VR the voltage signal at its terminals, and by IR the current flowing through the resistor from the positive terminal to the negative terminal. The relationship between R, VR, and IR is:

 V_R = R \times I_R

In causal modeling, this relationship is implemented as one of the following assignment statements, depending on which of the three variables is computed from the two others:

\begin{align}
 V_R &:= R \times I_R \\
 I_R &:= V_R \div R \\
 R &:= V_R \div I_R
\end{align}
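A minimal sketch of the three causal assignments, written as functions (the helper names are hypothetical):

```python
# One function per causal assignment of Ohm's law: each computes
# one variable from the other two.
def v_from(r, i):   # V := R * I
    return r * i

def i_from(v, r):   # I := V / R
    return v / r

def r_from(v, i):   # R := V / I
    return v / i

# Any one variable is recoverable from the other two:
R, I = 100.0, 0.05
V = v_from(R, I)
assert abs(i_from(V, R) - I) < 1e-12
assert abs(r_from(V, I) - R) < 1e-9
```

The point of the acausal view is that all three functions encode the same relation; choosing one of them is a simulation decision, not a physical one.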

Causal view

Mason wrote in 1953: "The flow graph may be interpreted as a signal transmission system in which each node is a tiny repeater station. The station receives signals via the incoming branches, combines the information in some manner, and then transmits the result along each outgoing branch."[2] Thus, a linear signal-flow graph may be regarded as a type of block diagram designed to represent cause and effect relationships in linear systems.[25] The causal interpretation is that a node's inputs are processed by the node function and assigned to the node's output.

Dean C. Karnopp,[26] who introduced causality concepts in bond graphs,[27] wrote: "For computer algorithms to solve equations representing the physics of real systems, it is essential that proper input and output causality be maintained. Some perfectly reasonable physical models simply will not compute because of causal problems."[28]

Acausal view

A signal-flow graph may also be regarded as a representation of a set of equations relating the node variables. In this interpretation, there is no input or output, but relationships between the node values.

Signal-flow graphs in dynamic systems engineering

Signal-flow graphs can be used for analysis, that is for understanding a model of an existing system, or for synthesis, that is for determining the properties of a design alternative.

Signal-flow graphs for dynamic systems analysis

When building a model of a dynamic system, a typical approach consists in applying these tasks:[29]

  • Defining the system and its components.
  • Formulating the mathematical model and listing the needed assumptions.
  • Writing the differential equations describing the model.
  • Solving the equations for the desired output variables.
  • Examining the solutions and the assumptions, and reiterating if necessary.

In this workflow, equations of the physical system's mathematical model are used to derive the signal-flow graph equations.

Signal-flow graphs for design synthesis

Signal-flow graphs can also be used in design space exploration, for expressing the equations of synthesized topologies. In such a synthesis workflow, the physical system does not yet exist (there is no physical system to be modeled); an optimization process seeks a suitable solution among different alternatives. For example, the state-space equations of a filter can be used to create different signal-flow graphs, from which filter implementations are derived.[30] The signal-flow graph can also be used as a specification from which various physical circuits implementing the SFG are generated.[31]

Criticisms of SFGs

In the article An Epistemic Prehistory of Bond Graphs,[32] Paynter wrote about flow graphs before the invention of bond graphs:

But for over a decade we remained troubled by three features of these directed graphs:

  • they are unduly complex and confusing, especially for large systems;
  • they force premature assignment of causality, unlike circuit diagrams;
  • they do not enforce conservation constraints, again unlike circuit diagrams.

In the article The Behavioral Approach to Open and Interconnected Systems,[33] Jan C. Willems wrote:

However, for modeling physical systems, it is often inappropriate. Input/output representations impose a cause/effect view of the interaction of a system with its environment, which is usually not part of the physical reality that the system describes. The difficulty with input/output thinking becomes even more pronounced in system interconnection. The requirement to endow a complex interconnected system with a signal flow graph to describe how subsystems interact is often arbitrary, and sometimes a caricature.

—Jan Camiel Willems

Linear signal-flow graph examples

Simple voltage amplifier

Figure 1: SFG of a simple amplifier

The amplification of a signal V1 by an amplifier with gain a12 is described mathematically by

V_2 = a_{12}V_1 \,.

The relationship represented by the signal-flow graph of Figure 1 is that V2 is dependent on V1, but it implies no dependency of V1 on V2. See Kou page 57.[34]

Ideal negative feedback amplifier

Figure 3: A possible signal-flow graph for the asymptotic gain model
Figure 4: A different signal-flow graph for the asymptotic gain model
A signal flow graph for a nonideal negative feedback amplifier based upon a control variable P relating two internal variables: xj = Pxi. Patterned after D'Amico et al.[35]

A possible SFG for the asymptotic gain model for a negative feedback amplifier is shown in Figure 3, and leads to the equation for the gain of this amplifier as

G = \frac {y_2}{x_1}  = G_{\infty} \left( \frac{T}{T + 1} \right) + G_0 \left( \frac{1}{T + 1} \right) \ .

The interpretation of the parameters is as follows: T = return ratio, G∞ = asymptotic gain, G0 = feedforward (indicating the possible bilateral nature of the feedback, possibly deliberate as in the case of feedforward compensation). Figure 3 has the interesting aspect that it resembles Figure 2 for the two-port network with the addition of the extra feedback relation x2 = T y1.

From this gain expression an interpretation of the parameters G∞ and G0 is evident, namely:

G_{\infty} = \lim_{T \to \infty }G\ ; \ G_{0} = \lim_{T \to 0 }G \ .

There are many possible SFGs associated with any particular gain relation. Figure 4 shows another SFG for the asymptotic gain model that can be easier to interpret in terms of a circuit. In this graph, parameter β is interpreted as a feedback factor and A as a "control parameter", possibly related to a dependent source in the circuit. Using this graph, the gain is

G = \frac {y_2}{x_1}  = G_{0} +  \frac {A} {1 - \beta A} \ .

To connect to the asymptotic gain model, parameters A and β cannot be arbitrary circuit parameters, but must relate to the return ratio T by:

 T = - \beta A \ ,

and to the asymptotic gain as:

 G_{\infty} = \lim_{T \to \infty }G = G_0 - \frac {1} {\beta} \ .

Substituting these results into the gain expression,

\begin{align}
G &= G_0 + \frac {1}{\beta} \frac {-T}{1+T} \\
  &= G_0 + (G_0 - G_{\infty}) \frac {-T}{1+T} \\
  &= G_{\infty} \frac {T}{1+T} + G_0 \frac {1}{1+T} \ ,
\end{align}

which is the formula of the asymptotic gain model.
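The algebra above can be spot-checked numerically for illustrative parameter values (not drawn from any particular circuit):

```python
# Check the identity relating the Figure-4 gain expression to the
# asymptotic gain model form, for illustrative G0, beta, A:
G0, beta, A = 2.0, -0.1, 50.0

T = -beta * A            # return ratio, T = -beta*A  (here T = 5)
G_inf = G0 - 1.0 / beta  # asymptotic gain, G_inf = G0 - 1/beta

lhs = G0 + A / (1 - beta * A)              # gain from the Figure 4 graph
rhs = G_inf * T / (1 + T) + G0 / (1 + T)   # asymptotic gain model form
assert abs(lhs - rhs) < 1e-9
```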

Electrical circuit containing a two-port network

A simple schematic containing a two-port and its equivalent signal flow graph.
Signal flow graph of a circuit containing a two port. The forward path from input to output is shown in a different color. The dotted line rectangle encloses the portion of the SFG that constitutes the two-port.

The figure to the right depicts a circuit that contains a y-parameter two-port network. Vin is the input of the circuit and V2 is the output. The two-port equations impose a set of linear constraints between its port voltages and currents. The terminal equations impose other constraints. All these constraints are represented in the SFG (Signal Flow Graph) below the circuit. There is only one path from input to output, shown in a different color, with a (voltage) gain of −RLy21. There are also three loops: −Riny11, −RLy22, and Riny21RLy12. Sometimes a loop indicates intentional feedback, but it can also indicate a constraint on the relationship between two variables. For example, the equation that describes a resistor says that the ratio of the voltage across the resistor to the current through it is a constant called the resistance. This can be interpreted as the voltage being the input and the current the output, or the current being the input and the voltage the output, or merely as the voltage and current having a linear relationship. Virtually all passive two-terminal devices in a circuit will show up in the SFG as a loop.

The SFG and the schematic depict the same circuit, but the schematic also suggests the circuit's purpose. Compared to the schematic, the SFG is awkward, but it does have the advantage that the input-to-output gain can be written down by inspection using Mason's rule.
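As a sketch, Mason's rule for this graph (one forward path, the three loops listed above, with loops 1 and 2 non-touching) can be cross-checked against direct elimination of the circuit equations. The y-parameters and resistances below are illustrative:

```python
# Circuit equations for the y-parameter two-port:
#   I1 = y11*V1 + y12*V2,  I2 = y21*V1 + y22*V2
#   V1 = Vin - Rin*I1,     V2 = -RL*I2
y11, y12, y21, y22 = 0.02, 0.001, 0.5, 0.01
Rin, RL, Vin = 50.0, 1000.0, 1.0

# Direct elimination of V1 and the currents gives:
#   V2*[(1+Rin*y11)*(1+RL*y22) - Rin*RL*y12*y21] = -RL*y21*Vin
V2_direct = -RL * y21 * Vin / (
    (1 + Rin * y11) * (1 + RL * y22) - Rin * RL * y12 * y21)

# Mason's rule: forward path P touches all three loops, so its
# cofactor is 1; loops 1 and 2 are the only non-touching pair.
P  = -RL * y21                  # forward path gain
L1 = -Rin * y11
L2 = -RL * y22
L3 = Rin * y21 * RL * y12
delta = 1 - (L1 + L2 + L3) + L1 * L2
V2_mason = P * Vin / delta

assert abs(V2_direct - V2_mason) < 1e-12
```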

Mechatronics: Position servo with multi-loop feedback

A depiction of a telescope controller and its signal flow graph
Angular position servo and signal flow graph. θC = desired angle command, θL = actual load angle, KP = position loop gain, VωC = velocity command, VωM = motor velocity sense voltage, KV = velocity loop gain, VIC = current command, VIM = current sense voltage, KC = current loop gain, VA = power amplifier output voltage, LM = motor inductance, VM = voltage across motor inductance, IM = motor current, RM = motor resistance, RS = current sense resistance, KM = motor torque constant (Nm/amp), T = torque, M = moment of inertia of all rotating components, α = angular acceleration, ω = angular velocity, β = mechanical damping, GM = motor back EMF constant, GT = tachometer conversion gain constant. There is one forward path (shown in a different color) and six feedback loops. The drive shaft is assumed to be stiff enough not to be treated as a spring. Constants are shown in black and variables in purple.

This example is representative of an SFG (signal-flow graph) used to represent a servo control system and illustrates several features of SFGs. Some of the loops (loop 3, loop 4 and loop 5) are extrinsic, intentionally designed feedback loops, shown with dotted lines. There are also intrinsic loops (loop 0, loop 1, loop 2) that are not intentional feedback loops, although they can be analyzed as though they were; these are shown with solid lines. Loops 3 and 4 are also known as minor loops because they sit inside a larger loop.

  • The forward path begins with θC, the desired position command. This is multiplied by KP, which could be a constant or a function of frequency. KP incorporates the conversion gain of the DAC and any filtering on the DAC output. The output of KP is the velocity command VωC, which is multiplied by KV, which can be a constant or a function of frequency. The output of KV is the current command VIC, which is multiplied by KC, which can be a constant or a function of frequency. The output of KC is the amplifier output voltage, VA. The current, IM, through the motor winding is the integral of the voltage applied to the inductance. The motor produces a torque, T, proportional to IM. Permanent magnet motors tend to have a linear current-to-torque function. The conversion constant of current to torque is KM. The torque, T, divided by the load moment of inertia, M, is the acceleration, α, which is integrated to give the load velocity ω, which is integrated to produce the load position, θL.
  • The forward path of loop 0 asserts that acceleration is proportional to torque and the velocity is the time integral of acceleration. The backward path says that as the speed increases there is a friction or drag that counteracts the torque. Torque on the load decreases proportionately to the load velocity until the point is reached that all the torque is used to overcome friction and the acceleration drops to zero. Loop 0 is intrinsic.
  • Loop 1 represents the interaction of an inductor's current with its internal and external series resistance. The current through an inductance is the time integral of the voltage across the inductance. When a voltage is first applied, all of it appears across the inductor. This is shown by the forward path through 1/(sLM). As the current increases, voltage is dropped across the inductor's internal resistance RM and the external resistance RS. This reduces the voltage across the inductor and is represented by the feedback path -(RM + RS). The current continues to increase, but at a steadily decreasing rate, until it reaches the point at which all the voltage is dropped across (RM + RS). Loop 1 is intrinsic.
  • Loop 2 expresses the effect of the motor back EMF. Whenever a permanent magnet motor rotates, it acts like a generator and produces a voltage in its windings. It does not matter whether the rotation is caused by a torque applied to the drive shaft or by current applied to the windings. This voltage is referred to as back EMF. The conversion gain from rotational velocity to back EMF is GM. The polarity of the back EMF is such that it diminishes the voltage across the winding inductance. Loop 2 is intrinsic.
  • Loop 3 is extrinsic. The current in the motor winding passes through a sense resistor. The voltage, VIM, developed across the sense resistor is fed back to the negative terminal of the power amplifier KC. This feedback causes the voltage amplifier to act like a voltage-controlled current source. Since the motor torque is proportional to motor current, the sub-system from VIC to the output torque acts like a voltage-controlled torque source. This sub-system may be referred to as the "current loop" or "torque loop". Loop 3 effectively diminishes the effects of loop 1 and loop 2.
  • Loop 4 is extrinsic. A tachometer (actually a low-power DC generator) produces an output voltage VωM that is proportional to its angular velocity. This voltage is fed to the negative input of KV. This feedback causes the sub-system from VωC to the load angular velocity to act like a voltage-controlled velocity source. This sub-system may be referred to as the "velocity loop". Loop 4 effectively diminishes the effects of loop 0 and loop 3.
  • Loop 5 is extrinsic. This is the overall position feedback loop. The feedback comes from an angle encoder that produces a digital output. The output position is subtracted from the desired position by digital hardware which drives a DAC which drives KP. In the SFG, the conversion gain of the DAC is incorporated into KP.
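As an illustration of how one of these intrinsic loops reduces, the winding loop (loop 1) can be collapsed with Mason's rule. This is a sketch using the caption's symbols, not the article's own derivation:

```python
# Sketch: collapsing intrinsic loop 1 (the motor winding) with Mason's rule.
# Symbols follow the figure caption; the derivation is illustrative only.
import sympy as sp

s, LM, RM, RS = sp.symbols('s L_M R_M R_S', positive=True)

forward = 1 / (s * LM)           # V_M -> I_M through the inductance
loop1 = -(RM + RS) * forward     # resistive feedback around the inductor

# One forward path and one loop: transfer = P1 / (1 - loop gain)
I_over_V = sp.simplify(forward / (1 - loop1))
# Mathematically this equals 1/(s*L_M + R_M + R_S): a first-order lag,
# matching the bullet's description of current rising at a decreasing rate.
```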

See Mason's rule for development of Mason's Gain Formula for this example.

Terminology and classification of signal-flow graphs[edit]

There is some confusion in the literature about what a signal-flow graph is; Henry Paynter, inventor of bond graphs, writes: "But much of the decline of signal-flow graphs [...] is due in part to the mistaken notion that the branches must be linear and the nodes must be summative. Neither assumption was embraced by Mason, himself!"[32]

Standards covering signal-flow graphs[edit]

  • IEEE Std 155-1960, IEEE Standards on Circuits: Definitions of Terms for Linear Signal Flow Graphs, 1960.
This IEEE standard defines a signal-flow graph as a network of directed branches connecting nodes, where the nodes represent dependent and independent signals. Incoming branches carry branch signals to the dependent nodes. A dependent node's signal is the algebraic sum of its incoming branch signals, i.e. nodes are summative.
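The summative-node rule can be sketched directly as a fixed-point computation. The graph and branch gains below are invented for illustration and are not from the standard:

```python
# Sketch of the summative-node rule from IEEE Std 155-1960: a dependent
# node's signal is the algebraic sum of (branch gain x tail-node signal)
# over its incoming branches. The graph and gains below are invented.
branches = [                 # (from_node, to_node, branch gain)
    ("x_in", "x1", 2.0),
    ("x1",   "x2", 3.0),
    ("x2",   "x1", -0.1),    # feedback branch: x1 also depends on x2
]
signals = {"x_in": 1.0, "x1": 0.0, "x2": 0.0}

# Fixed-point iteration over the dependent nodes; it converges here
# because the single loop gain (3.0 * -0.1 = -0.3) has magnitude < 1.
for _ in range(100):
    for node in ("x1", "x2"):
        signals[node] = sum(g * signals[src]
                            for src, dst, g in branches if dst == node)
```

The converged values agree with solving the two node equations directly: x1 = 2 - 0.1·x2 and x2 = 3·x1 give x1 = 2/1.3 and x2 = 6/1.3.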

State transition signal-flow graph[edit]

State transition signal-flow graph. Each initial condition is considered as a source (shown in blue).

A state transition SFG or state diagram is a simulation diagram for a system of equations, including the initial conditions of the states.[36]

Nonlinear flow graphs[edit]

Mason introduced both nonlinear and linear flow graphs. To clarify this point, Mason wrote: "A linear flow graph is one whose associated equations are linear."[2]

Examples of nonlinear branch functions[edit]

If we denote by xj the signal at node j, the following are examples of node functions that do not pertain to a linear time-invariant system:

 x_j = x_k × x_l
 x_k = |x_j|
 x_l = log(x_k)
 x_m = t × x_j , where t represents time
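Written as branch functions on plain float signals, these look as follows (illustrative only):

```python
# The nonlinear node functions listed above, as Python branch functions.
import math

def node_j(x_k, x_l):   # product of two signals: nonlinear
    return x_k * x_l

def node_k(x_j):        # absolute value
    return abs(x_j)

def node_l(x_k):        # logarithm
    return math.log(x_k)

def node_m(x_j, t):     # gain that varies with time t: not time-invariant
    return t * x_j
```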

Examples of nonlinear signal-flow graph models[edit]

Applications of SFG techniques in various fields of science[edit]

See also[edit]

References[edit]

  1. ^ a b CE Shannon (January 1942). "The theory and design of linear differential equation machines". Fire Control of the US National Defense Research Committee: Report 411, Section D-2.  Reprinted in N. J. A. Sloane, Aaron D. Wyner, ed. (1993). Claude E. Shannon: Collected Papers. Wiley IEEE Press. p. 514. ISBN 978-0-7803-0434-5. 
  2. ^ a b c d e f Mason, Samuel J. (September 1953). "Feedback Theory - Some Properties of Signal Flow Graphs". Proceedings of the IRE: 1144–1156. The flow graph may be interpreted as a signal transmission system in which each node is a tiny repeater station. The station receives signals via the incoming branches, combines the information in some manner, and then transmits the results along each outgoing branch. 
  3. ^ Jørgen Bang-Jensen, Gregory Z. Gutin (2008). Digraphs. Springer. ISBN 9781848009981. 
  4. ^ Bela Bollobas (1998). Modern graph theory. Springer Science & Business Media. p. 8. ISBN 9781461206194. 
  5. ^ SJ Mason (July 1956). "Feedback Theory-Further Properties of Signal Flow Graphs". Proceedings of the IRE 44 (7): 920–926. doi:10.1109/JRPROC.1956.275147.  On-line version found at MIT Research Laboratory of Electronics.
  6. ^ Chen, Wai-Kai (1976). Applied Graph Theory : Graphs and Electrical Networks. Elsevier. ISBN 9781483164151. (WKC 1976, p. 167)
  7. ^ CE Shannon (January 1942). "The theory and design of linear differential equation machines". Fire Control of the US National Defense Research Committee: Report 411, Section D-2.  Reprinted in N. J. A. Sloane, Aaron D. Wyner, ed. (1993). Claude E. Shannon: Collected Papers. Wiley IEEE Press. p. 514. ISBN 978-0-7803-0434-5. 
  8. ^ Lorens, Charles Stanton (July 15, 1956), Vogel, Dan, ed., Technical Report 317 - Theory and applications of flow graphs, Research Laboratory of Electronics, MIT 
  9. ^ (WKC 1976, p. 169)
  10. ^ Okrent, Howard; McNamee, Lawrence P. (1970). "3. 3 Flowgraph Theory". NASAP-70 User's and Programmer's manual. Los Angeles, California: School of Engineering and Applied Science, University of California at Los Angeles. pp. 3–9. 
  11. ^ a b c Louis PA Robichaud, Maurice Boisvert, Jean Robert (1962). "Preface". Signal flow graphs and applications. Prentice Hall. p. x. ASIN B0000CLM1G. 
  12. ^ Horace M Trent (1955). "Isomorphisms between Oriented Linear Graphs and Lumped Physical Systems". Journal of the Acoustical Society of America 27 (3): 500 ff. 
  13. ^ a b Kuo, Benjamin C. (1967). Automatic Control Systems (2nd ed.). Prentice-Hall. pp. 59–60. 
  14. ^ Kofránek, J; Mateják, M; Privitzer, P; Tribula, M (2008), Causal or acausal modeling: labour for humans or labour for machines, Technical Computing Prague 2008. Conference Proceedings., Prague, p. 16 
  15. ^ J Choma, Jr (April 1990). "Signal flow analysis of feedback networks". IEEE Trans Circuits & Systems 37 (4): 455–463. doi:10.1109/31.52748. 
  16. ^ Wai-Kai Chen (1971). "Chapter 3: Directed graph solutions of linear algebraic equations". Applied graph theory. North-Holland Pub. Co. p. 140. ISBN 978-0444101051.  Partly accessible using Amazon's look-inside feature.
  17. ^ Kuo, Benjamin C. (1967). Automatic Control Systems (2nd ed.). Prentice-Hall. p. 56. 
  18. ^ (Robichaud 1962, p. ix)
  19. ^ "... solving a set of simultaneous, linear algebraic equations. This problem, usually solved by matrix methods, can also be solved via graph theory. " Deo, Narsingh (1974). Graph Theory with Applications to Engineering and Computer Science. Prentice-Hall of India. p. 416. ISBN 81-203-0145-5.  also on-line at [1]
  20. ^ Deo, Narsingh (1974). Graph Theory with Applications to Engineering and Computer Science. Prentice-Hall of India. p. 417. ISBN 81-203-0145-5.  also on-line at [2]
  21. ^ (Ogata 2002, p. 68, 106)
  22. ^ Labrèche P., presentation: Linear Electrical Circuits:Symbolic Network Analysis, 1977.
  23. ^ Carl Engelman, The legacy of MATHLAB 68, published in Proceeding SYMSAC '71 Proceedings of the second ACM symposium on Symbolic and algebraic manipulation, pages 29-41 [3]
  24. ^ "A signal flow graph may be regarded as a simplified version of a block diagram. ... for cause and effect ... of linear systems ...we may regard the signal-flow graphs to be constrained by more rigid mathematical rules, whereas the usage of the block-diagram notation is less stringent." Kuo, Benjamin C. (1991). Automatic Control Systems (6th ed.). Prentice-Hall. p. 77. ISBN 0-13-051046-7. 
  25. ^ Farid Golnaraghi, Benjamin C. Kuo (2009). "§3-2: Signal-flow graphs (SFGs)". Automatic control systems (9th ed.). Wiley. p. 119. ISBN 978-0470048962.  This view also is found on line in the third edition, page 64, §3.5.
  26. ^ Professor Emeritus, Department of Mechanical and Aerospace Engineering, University of California, Davis
  27. ^ Karnopp, Dan; Rosenberg, R. C. (1968), Analysis and Simulation of Multiport Systems: The Bond Graph Approach to Physical System Dynamics, Cambridge: the M.I.T. Press 
  28. ^ Karnopp, Dean; Margolis, Donald (January 2001), "The language of interaction", Mechanical Engineering 
  29. ^ Dorf, Richard C.; Bishop, Robert H. (2001). Modern Control Systems. Prentice Hall. pp. Chap 2–2. ISBN 0-13-030660-6. 
  30. ^ Antao, B. A. A.; Brodersen, A.J. (June 1995). "ARCHGEN: Automated synthesis of analog systems". Very Large Scale Integration (VLSI) Systems, IEEE Transactions on 3 (2): 231 - 244. doi:10.1109/92.386223. 
  31. ^ Doboli, A.; Dhanwada, N.; Vemuri, R. (May 2000). "A heuristic technique for system-level architecture generation from signal-flow graph representations of analog systems". Circuits and Systems, 2000. Proceedings. ISCAS 2000 Geneva. The 2000 IEEE International Symposium on. doi:10.1109/ISCAS.2000.856026. 
  32. ^ a b Paynter, Henry (1992). "An Epistemic Prehistory of Bond Graphs". pp. 10, 15 pages. 
  33. ^ Willems, Jan C. (19 November 2007). "The Behavioral Approach to Open and Interconnected Systems". Control Systems, IEEE (IEEE). doi:10.1109/MCS.2007.906923. 
  34. ^ Kuo (1967, p. 57)
  35. ^ Arnaldo D’Amico, Christian Falconi, Gianluca Giustolisi, Gaetano Palumbo (April 2007). "Resistance of Feedback Amplifiers: A novel representation". IEEE Trans Circuits & Systems - II Express Briefs 54 (4): 298 ff. 
  36. ^ Houpis, Constantine H.; Sheldon, Stuart N. (2013). "section 8.8". Linear Control System Analysis and Design with MATLAB®, Sixth Edition. Boca Raton, FL: CRC Press. pp. 171–172. ISBN 9781466504264. 
  37. ^ For example: Baran, Thomas A.; Oppenheim, Alan V. (2011), Inversion of Nonlinear and Time-Varying Systems, Digital Signal Processing Workshop and IEEE Signal Processing Education Workshop (DSP/SPE), IEEE, doi:10.1109/DSP-SPE.2011.5739226 
  38. ^ a b Guilherme, J.; Horta, N. C.; Franca, J. E. (1999). Symbolic Synthesis of Non-Linear Data Converters. 
  39. ^ Brzozowski, J. A.; McCluskey, E. J. (1963). Signal Flow Graph Techniques for Sequential Circuit State Diagrams. IEEE Transactions on Electronic Computers. IEEE. p. 97. 
  40. ^ Barry, J. R., Lee, E. A., & Messerschmitt, D. G. (2004). Digital Communication (3rd ed.). New York: Springer. p. 86. ISBN 0-7923-7548-3. 
  41. ^ Hall, John E. (August 23, 2004). "The pioneering use of systems analysis to study cardiac output regulation". Am J Physiol Regul Integr Comp Physiol. doi:10.1152/classicessays.00007.2004. Retrieved 2015-01-20. 


Further reading[edit]

  • Ernest J. Henley and R. A. Williams (1973). Graph theory in modern engineering; computer aided design, control, optimization, reliability analysis. Academic Press. ISBN 978-0-08-095607-7.  Book almost entirely devoted to this topic.
  • Wai-Kai Chen (1976). Applied Graph Theory. North Holland Publishing Company. ISBN 0720423627.  Chapter 3 for the essentials, but applications are scattered throughout the book.
  • Wai-Kai Chen (May 1964). "Some applications of linear graphs". Contract DA-28-043-AMC-00073 (E). Coordinated Science Laboratory, University of Illinois, Urbana. 
  • K. Thulasiraman and M. N. S. Swamy (1992). Graphs: Theory and Algorithms. 6.10-6.11 for the essential mathematical idea. ISBN 0-471-51356-3. 
  • Shu-Park Chan (2006). "Graph theory". In Richard C. Dorf. Circuits, Signals, and Speech and Image Processing (3rd ed.). CRC Press. § 3.6. ISBN 978-1-4200-0308-6.  Compares Mason and Coates graph approaches with Maxwell's k-tree approach.
  • RF Hoskins (2014). "Flow-graph and signal flow-graph analysis of linear systems". In SR Deards, ed. Recent Developments in Network Theory: Proceedings of the Symposium Held at the College of Aeronautics, Cranfield, September 1961. Elsevier. ISBN 9781483223568.  A comparison of the utility of the Coates flow graph and the Mason flow graph.

External links[edit]