## Monday, March 21, 2011

### Como category seminar: Open Markov chains

Today (16th March 2011) was the defence of the doctoral thesis of our student Luisa de Francesco Albasini. I will describe one aspect of her work. Traditional Markov chains are an extremely useful tool in diverse applications. To give just one example, Google's PageRank algorithm uses a Markov chain to rank web pages, creating some order in the internet.
A Markov chain is a system whose dynamics is encoded in a square matrix of non-negative reals whose row sums are all 1. The rows (and columns) may be thought of as the states of the system, and the (i, j)-th entry of the matrix as the probability of a transition from state i to state j. The k-th power of the matrix gives the probabilities of paths of length k.
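As a small illustration (the matrix here is my own example, not from the thesis), we can check the row-sum condition and compute path probabilities with a matrix power:

```python
import numpy as np

# A 3-state Markov chain: P[i, j] is the probability of a transition
# from state i to state j.  (States and probabilities are illustrative.)
P = np.array([
    [0.5, 0.5, 0.0],
    [0.2, 0.3, 0.5],
    [0.0, 0.4, 0.6],
])

# Every row sums to 1, so P is a Markov (row-stochastic) matrix.
assert np.allclose(P.sum(axis=1), 1.0)

# The k-th power gives the probabilities of paths of length k:
# P3[i, j] is the probability of going from state i to state j in 3 steps.
P3 = np.linalg.matrix_power(P, 3)
assert np.allclose(P3.sum(axis=1), 1.0)  # still row-stochastic
```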
A Markov chain is also easily represented as a labelled graph or automaton:
Systems modelled in this way are closed systems. An essential aspect of the compositionality of systems is that one needs to consider open systems.

Luisa gives an answer as to what an open Markov chain is, and what the algebra of such chains is. Aspects of this idea occur in other studies, but the combination is new; in particular, the constants of the algebra. More details may be found in the two papers [1], [2] below.

So what is an open Markov chain? It consists of a family of non-negative n×n matrices rather than just one. Each matrix in the family describes the probabilities of transitions between states in the presence of a given input/output, or better, with a given interaction with the environment. The sum of all the matrices in the family is a Markov matrix, though each individual matrix is not.
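The defining condition is easy to check concretely. Here is a made-up two-state example with two possible interactions 'a' and 'b' (the alphabet and numbers are my own, purely for illustration):

```python
import numpy as np

# A hypothetical open Markov chain on 2 states.  M[s] gives the
# transition probabilities in the presence of interaction s with
# the environment.
M = {
    'a': np.array([[0.3, 0.2],
                   [0.1, 0.4]]),
    'b': np.array([[0.1, 0.4],
                   [0.5, 0.0]]),
}

# No individual matrix in the family is Markov...
assert not np.allclose(M['a'].sum(axis=1), 1.0)

# ...but the sum of the family is a Markov matrix:
total = sum(M.values())
assert np.allclose(total.sum(axis=1), 1.0)
```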
An open Markov chain may also be represented as an automaton, but now labelled not only by probabilities but also by input/output symbols:
The corresponding automaton:

Notice that in the automaton the interaction with the environment is pictured as wires with possible signals on them, and there may be several such wires. For the algebra they are organized as wires on the left and wires on the right, but one should not really think in terms of input on the left and output on the right. The wires are not input/output wires but channels to the environment.

What we have described so far fits into the classical notion of a weighted transducer, though that notion is usually thought of as translating input to output.

What are important now are the binary operations and constants. The main binary operation is the circuit-like operation of joining the right-hand wires of one automaton R to the left-hand wires of another automaton S; we denote the composite by RS.
The formula for this operation is analogous to the formula for composition of relations, or of matrices:
Actually I need to make a correction: this formula does not necessarily result in an open Markov chain, because the condition on the row sums may be violated. In forming the composite, the transitions are restricted to those which are possible in the combined system. This means that the matrices of RS need to be normalized (and an extra condition is needed to ensure that zero row sums do not occur in the total matrix of RS).
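The composition and the normalization can be sketched in code. This is only my own sketch, not the definition from the papers: I assume the states of RS are pairs of states of R and S (hence the Kronecker product), that the joined wires carry a common alphabet B, and that summing over B and renormalizing the rows of the total matrix recovers the row-sum condition.

```python
import numpy as np

# Signal alphabets on the wires (illustrative names).
A, B, C = ['a'], ['0', '1'], ['c']
n = 2  # both components have 2 states

rng = np.random.default_rng(0)

def random_open_chain(left, right, n):
    """A random family of non-negative matrices, scaled so that the
    family sums to a Markov matrix (an open Markov chain)."""
    fam = {(l, r): rng.random((n, n)) for l in left for r in right}
    scale = sum(fam.values()).sum(axis=1, keepdims=True)
    return {k: m / scale for k, m in fam.items()}

R = random_open_chain(A, B, n)  # left wires A, right wires B
S = random_open_chain(B, C, n)  # left wires B, right wires C

# Join the right wires of R to the left wires of S: sum over the
# signals b on the joined wires, pairing states via the Kronecker
# product (an assumption of this sketch).
RS = {(a, c): sum(np.kron(R[(a, b)], S[(b, c)]) for b in B)
      for a in A for c in C}

# Normalize so the total matrix of RS has row sums 1
# (assumed nonzero here, as the text requires).
row_sums = sum(RS.values()).sum(axis=1, keepdims=True)
RS = {k: m / row_sums for k, m in RS.items()}

assert np.allclose(sum(RS.values()).sum(axis=1), 1.0)
```

Without the final normalization step, the total matrix of RS would generally fail the row-sum condition, which is exactly the correction noted above.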

This operation exists in the work of Schützenberger and Eilenberg on transducers.
In addition to this operation (and a similar one analogous to the parallel composition of circuits) we introduce the following important constants:
the identity, the diagonal, the twist, the opposite diagonal, the turn and the opposite turn. These are wire operations and have the following pictorial representations:
These constant automata all have one state, in which there is a transition for every combination of signals (I/O), and all transitions have the same probability. Using the constants and the binary operations we can form "circuits" of open Markov chains. If a circuit is closed, we get a traditional Markov chain as a result.
This structure of the Markov chain may be highly significant in applications. The open "components" of a closed Markov chain may be the entities of interest. What you can prove about a Markov chain may come from intuition about its open components rather than just from the global states and transition probabilities (just as a building may be modelled by a huge system of differential equations, or by considering how it is built from components).

If you want to see some examples, or some precise definitions, look at the papers below.
This has been an experiment in using graphics in this blog - it hasn't proved to be too difficult.

[1] L. de Francesco Albasini, N. Sabadini, R.F.C. Walters: The compositional construction of Markov processes, Applied Categorical Structures, Springer Netherlands, 2010 (preliminary version arXiv:0901.2434, 16.01.2009).
[2] L. de Francesco Albasini, N. Sabadini, R.F.C. Walters: The compositional construction of Markov processes II, to appear in RAIRO - Theoretical Informatics and Applications, EDP Sciences (preliminary version arXiv:1005.0949v1, 6.05.2010).

Update: Here is a link to Luisa's thesis. Notice that what I called open Markov chains are instead called Markov automata in the thesis.
