tag:blogger.com,1999:blog-74465942014-07-27T13:17:51.216-07:00Bob WaltersRobert Waltershttp://www.blogger.com/profile/11168659017114792898noreply@blogger.comBlogger218125tag:blogger.com,1999:blog-7446594.post-55159352431078459812014-07-24T05:00:00.001-07:002014-07-25T09:54:27.195-07:00Lex total categories and Grothendieck toposes<div dir="ltr">In the very valuable volume Sketches of an Elephant, A Topos Theory Compendium, Volume 1, Peter Johnstone mentions a lecture of Andre Joyal at the 1981 Cambridge Summer Meeting where he listed seven different descriptions of 'what a topos is like'. His fifth one was</div><div dir="ltr"><br />(v) 'A topos is a totally cocomplete object in the meta-2-category of finitely complete categories'. <br /><br /><a name='more'></a><br /><br />Peter also writes that in the 20 years that have passed since that lecture, the category-theory community (and computer scientists) have added a few more descriptions to the list, and he mentions another six including</div><div dir="ltr"><br /> (ix ) 'A topos is the category of maps of a power allegory'</div><div dir="ltr"><br /></div><div dir="ltr">Recently a young topos theorist asked me about (v), which was a view of toposes first presented in [2]. I think that the description by Peter Johnstone makes mysterious what I believe is the simplest conceptual description of the notion of topos. (I haven't found any expanded explanation in the Elephant of (v).)</div><div dir="ltr">The idea begins as follows:<br /> The open sets of a topological space form a partially ordered set with arbitrary sups and finite intersections, with the property that intersection distributes over arbitrary sups: what is called abstractly a locale. We can state this in a slightly more sophisticated way. 
</div><div dir="ltr"><br /></div><div dir="ltr">Instead of considering the sup of a subset we may look at sups of downsets (subsets V such that if x < = y and y is in V then also x is in V).<br />It is clear that every subset U generates a downset ((U)) which has a sup iff U does, and supU=sup((U)). <br />Let PX be the poset of downsets of X. PX may also be seen as the set of order-preserving functions X^opp -> 2 (2={0 < = 1}); just look at the inverse image of 1 to find the corresponding downset.<br />Then there is an order-preserving function yoneda: X -> PX which takes x to ((x)), the principal downset generated by x.<br /><br />FACT: yoneda:X->PX has a left adjoint iff X has arbitrary sups, and the left adjoint takes U to supU.<br />proof. The adjunction condition is that supU < = x iff U < = ((x)), which clearly characterizes sup.<br /><br />FACT: if X has finite infs and arbitrary sups then sup : PX -> X preserves finite infs iff finite infs distribute over arbitrary sups in X; i.e. iff X is a locale.<br />proof. It is easy to see that U inf V = ((u inf v; u in U, v in V)). The preservation of finite infs means that <br />sup{u inf v; u in U, v in V} = sup((u inf v ; u in U, v in V)) = sup(U inf V) = supU inf supV.<br />A special case of this, which however implies the general case, is that (reading backwards) <br />u inf supV = sup(u inf v; v in V), the distribution of finite infs over arbitrary sups.<br /><br />NEW DEFINITION: So we have a new definition of a poset X being a LOCALE, namely that it is finitely complete and that yoneda has a left exact left adjoint. This definition has a straightforward extension to categories.<br />A category X is LEX TOTAL [1] iff it is finitely complete and yoneda : X -> PX has a left exact left adjoint colimit : PX -> X where PX is the category of presheaves on X, or alternatively the category of discrete fibrations with small fibre over X.
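For finite posets the two FACTs above can be checked mechanically. Here is a small Python sketch (the encoding and all names are mine, not from the literature) which brute-forces the adjunction condition supU < = x iff U < = ((x)) on two lattices: the powerset of {1,2}, which is a locale, and the five-element non-distributive lattice M3, where the left adjoint to yoneda exists but fails to preserve finite infs.

```python
from itertools import combinations

def subsets(xs):
    xs = list(xs)
    return [list(c) for r in range(len(xs) + 1) for c in combinations(xs, r)]

class FiniteLattice:
    """A finite lattice presented by its elements and order relation leq."""
    def __init__(self, elems, leq):
        self.elems, self.leq = list(elems), leq

    def sup(self, S):  # least upper bound, found by brute force
        ubs = [x for x in self.elems if all(self.leq(s, x) for s in S)]
        return next(u for u in ubs if all(self.leq(u, v) for v in ubs))

    def inf2(self, a, b):  # binary inf
        lbs = [x for x in self.elems if self.leq(x, a) and self.leq(x, b)]
        return next(m for m in lbs if all(self.leq(n, m) for n in lbs))

    def down(self, x):  # ((x)), the principal downset of x
        return [y for y in self.elems if self.leq(y, x)]

    def yoneda_has_left_adjoint(self):
        # the adjunction condition: sup U <= x  iff  U is contained in ((x))
        return all(self.leq(self.sup(U), x) == all(u in self.down(x) for u in U)
                   for U in subsets(self.elems) for x in self.elems)

    def is_locale(self):
        # finite infs distribute over sups: u inf sup V = sup {u inf v; v in V}
        return all(self.inf2(u, self.sup(V)) ==
                   self.sup([self.inf2(u, v) for v in V])
                   for u in self.elems for V in subsets(self.elems))

# The powerset of {1, 2}, ordered by inclusion: a locale.
P = FiniteLattice([frozenset(s) for s in subsets([1, 2])],
                  lambda a, b: a <= b)

# M3, the five-element "diamond" with three incomparable middle elements:
# a complete lattice, but not distributive, hence not a locale.
M3 = FiniteLattice(['0', 'a', 'b', 'c', '1'],
                   lambda x, y: x == y or x == '0' or y == '1')

print(P.yoneda_has_left_adjoint(), P.is_locale())    # True True
print(M3.yoneda_has_left_adjoint(), M3.is_locale())  # True False
```

The failure in M3 is the classical one: a inf sup{b,c} = a inf 1 = a, while sup{a inf b, a inf c} = sup{0,0} = 0.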
We will see that it is in the second sense that the left adjoint may be called colimit.<br /><br />Let me make some connection with the ideas I described above about locales.<br /><br />FACT: A category X has the property that yoneda : X -> PX has a left adjoint iff every discrete fibration with small fibres over X has a colimit. Further the adjoint to yoneda is colimit (regarding the objects of PX as discrete fibrations).<br />proof. Yoneda takes an object x in X to the fibration ((x))->X where ((x)) has objects arrows (y->x) <br />projecting to y, and arrows (z->y) : (z->y->x)->(y->x) projecting to z->y.<br />Hence the adjunction condition is that there is a natural bijection between arrows colimitF -> x and<br />functors over X, F -> ((x)). But it is not difficult to check that a functor (over X) from F to ((x)) is the same as a cocone from F to x, and hence the result.<br /><br />What about the fact that when we considered posets we said that in considering sups it was sufficient<br />to consider sups of downsets? The corresponding fact here for categories [2] is that any functor Y->X where Y is small has a factorization through a discrete fibration over X with small fibres with the same colimit (if the colimit exists). However there is something new.<br /><br />SOMETHING NEW: Not all discrete fibrations with small fibres come from functors with small domain.<br />So we have a new notion of cocompleteness - TOTAL COCOMPLETENESS - which means cocomplete for all discrete fibrations with small fibre. This is a much stronger condition (and in our opinion a sadly neglected one) than small cocompleteness, and has a much stronger adjoint functor theorem.<br />It holds for most commonly used small cocomplete categories, the first being sheaves on a small category where the left adjoint to yoneda of PX is P of yoneda of X.<br /><br />I think I will continue in another post. 
Suffice to say here that lex total categories, the direct generalization of locales, have all the completeness and exactness properties of Grothendieck toposes (lacking only possibly a small set of generators). They also have, via the adjoint functor theorem, the constructions of elementary toposes, subject again to some smallness conditions.<br /><br />We believe that the notion of lex total category is the simplest conceptual description of Grothendieck<br />toposes, and deserves wider attention and use. <br /><br />NOTE: I should point out that there has been appreciation and development of these ideas by Wood and Rosebrugh, Tholen, Kelly, Street and Freyd, and others, beyond their introduction in the papers below by Street and Walters. I will give some references in the next post.</div><div dir="ltr"><br /></div><div dir="ltr">I almost forgot to make a remark about Peter's aspect (ix) of toposes. I think it is a pity that he did not use the characterization of elementary toposes in terms of bicategories of relations in [3].<br /><br /></div><div dir="ltr">[1] RH Street, RFC Walters, The comprehensive factorization of a functor, Bull. Amer. Math. Soc. 79, 1973.<br />[2] RH Street, RFC Walters, Yoneda structures on 2-categories, J. Algebra, 50, 1978.<br />[3] A. Carboni, RFC Walters, Cartesian bicategories I, JPAA, 1987.</div>Robert Waltershttp://www.blogger.com/profile/11168659017114792898noreply@blogger.com0tag:blogger.com,1999:blog-7446594.post-58900875430644984982014-07-14T05:29:00.002-07:002014-07-14T05:43:27.678-07:00The algebra of processes XIWe have been thinking about categorical algebra for computer science now for 25 years. I want to discuss the problems of thinking seriously about two distinct disciplines, and how one might avoid the danger of falling between two stools.<br /><a name='more'></a><br /><br />There is clearly a problem at the level of exposition. 
We have tended to write for category theorists, but often in a slightly watered down form, so that the result is not interesting to category theorists, and at the same time obscure to computer scientists. Some of our articles are purely mathematical and I suspect category theorists are not aware that they are written to support our computer science investigations. I think the solution is to write two quite different types of article, one for category theorists, and the other for computer scientists.<br /><br />The more serious problem is at the level of research. We think that some apparent applications of category theory to computer science are superficial, arising from the casual recognition of some categorical constructions in areas already developed. In order to avoid this tendency we have always sought to be principally advised by the computer science problem rather than category theory. This has the effect that we sometimes fail to say obvious categorical things.<br /><br />As an example, in the recent ten posts on the algebra of processes I think we have given a distorted picture of the algebra by keeping our noses to the ground, and avoiding the more advanced categorical aspects.<br /><br />Here in brief is a more advanced point of view of the categorical algebra of processes introduced in the previous ten posts. We begin with an elementary topos C and consider the bicategory Span(C). This has lots of nice structure and properties. (i) It is a symmetric monoidal (from product) bicategory with commutative separable algebra structure on the objects. (ii) Composition and tensor preserve local colimits. (iii) In particular tensor and composition commute with pi_0 of local reflexive graphs.<br />For objects A,B we may consider Cospan(Span(A,B)). (iv) These categories are symmetric monoidal with tensor coming from sum, and with commutative separable algebra structure on the objects. 
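Before drawing the consequences of (i)-(iv), it may help to recall that composition in Span(C) is by pullback. In Span(Set) this is concrete: a sketch in Python (the encoding of a span as a pair of dicts giving its two legs is mine, for illustration only):

```python
def span_compose(f1, g1, f2, g2):
    """Compose spans A <-f1- S1 -g1-> B and B <-f2- S2 -g2-> C in Span(Set).
    Legs are dicts; the composite carrier is the pullback of g1 along f2,
    i.e. the pairs of elements agreeing on the common interface B."""
    carrier = [(s, t) for s in f1 for t in f2 if g1[s] == f2[t]]
    left = {(s, t): f1[s] for (s, t) in carrier}   # composite left leg, to A
    right = {(s, t): g2[t] for (s, t) in carrier}  # composite right leg, to C
    return left, carrier, right

# The relation {(0,'a'), (1,'a')} from {0,1} to {'a','b'} as a span ...
f1, g1 = {'p': 0, 'q': 1}, {'p': 'a', 'q': 'a'}
# ... composed with the relation {('a','X')} from {'a','b'} to {'X'}:
f2, g2 = {'r': 'a'}, {'r': 'X'}
left, carrier, right = span_compose(f1, g1, f2, g2)
print(sorted(carrier))  # [('p', 'r'), ('q', 'r')]
```

Unlike relational composition, the carrier keeps multiplicities: each matching pair is a separate element of the pullback.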
As a result of (iv) and the main theorem of the paper <a href="http://www.tac.mta.ca/tac/volumes/15/6/15-06.pdf">www.tac.mta.ca/tac/volumes/15/6/15-06.pdf</a> there are expressions in the algebra Cospan(Span(A,B)) which represent labelled graphs. As a result of (iii), which is effectively a distributive law, the composition of local graphs in Span(C) is done as in the paper <a href="http://comocategoryarchive.com/Archive/people/walters/papers/pdf/1997-span-graph.pdf">Span(Graph)</a>.<br /><br />And so on. In this algebra we may describe classical sequential programming, and concurrent programming, as illustrated in the previous posts.<br /><br />We will write two versions of the lecture I intended for the CT2014, one for category theorists and one for computer scientists. I hope this will allow us to avoid falling between two stools.<br /><br /><br /><br />Robert Waltershttp://www.blogger.com/profile/11168659017114792898noreply@blogger.com0tag:blogger.com,1999:blog-7446594.post-81438073004573307302014-07-13T04:25:00.001-07:002014-07-13T05:08:59.193-07:00The seeds of homotopy type theoryI was recently very surprised to find a lecture (<a href="http://www.math.ias.edu/~vladimir/Site3/Univalent_Foundations_files/2014_IAS.pdf">http://www.math.ias.edu/~vladimir/Site3/Univalent_Foundations_files/2014_IAS.pdf</a>) of Vladimir Voevodsky explaining his motivation for working on homotopy type theory and the mechanization of mathematics.<br /><br />In brief the reason was that a group of well-known mathematicians working on "higher dimensional mathematics" since the 1980's were making errors which were not discovered for years, in one case not till more than 20 years after publication. 
Voevodsky refers to this situation as "outrageous".<br /><br /><a name='more'></a><br /><br />He concludes that "mathematics is in need of a completely new foundation" and in particular that the categorical foundations for mathematics are inadequate for its formalization.<br /><br />He is satisfied with Univalent Foundations and the proof assistant Coq.<br /><br />His key statement is: "I now do my mathematics with a proof assistant and do not have to worry all the time about mistakes in my arguments or about how to convince others that my arguments are correct."<br /><br />I firmly disagree with the need to mechanize mathematics (though not with the usefulness of computer calculation).<br />In my view mathematicians are obliged to "convince others" by normal human discourse, however difficult and imperfect that process may sometimes be.<br /><br /><br /><br />Robert Waltershttp://www.blogger.com/profile/11168659017114792898noreply@blogger.com0tag:blogger.com,1999:blog-7446594.post-43929726539608942612014-06-27T06:47:00.001-07:002014-06-29T00:38:55.273-07:00Shag on a rockWell, it turns out that I cannot go to the Cambridge meeting, Category Theory 2014, after all. I will be having some treatment next week. I will try to put up on arXiv an account of the lecture I intended to give.<br /><br />In an earlier post I said that my talk seemed to have no connection with any other that I could see from the titles. However looking a bit more closely perhaps there are one or two related talks.<br /><a name='more'></a><br /><br />The title of Aleks Kissinger's talk, Finite Matrices are Complete for (Dagger) Multigraph Categories, deceived me until I saw his arXiv paper, arXiv:1406.5942, where I see that multigraph categories are what Aurelio Carboni and I introduced at the Louvain-la-Neuve conference in 1987 under the name well-supported compact closed (wscc) categories. We (Rosebrugh, Sabadini, Walters,...) 
have been working on them in the context of automata and processes since 1997 with the introduction of Span(Graph) and later Cospan(Graph). In fact my talk was to be about recent developments in that combined algebra.<br /><br />I am not sure if he knows the history, which is partly described in our 2005 TAC paper "<a href="http://www.tac.mta.ca/tac/volumes/15/6/15-06.pdf">Generic commutative separable algebras and cospans of graphs</a>" (he calls separable algebras special Frobenius algebras). A more colourful story is told at Joachim Kock's <a href="http://mat.uab.es/%7Ekock/TQFT.html#history">page</a>. In view of what I think of the importance of wscc=multigraph categories I did suggest to Peter Selinger that he include them in his survey of graphical languages but he decided not to. However his suggestion that tangling of circuits might be important led to our papers on those available <a href="http://www.tac.mta.ca/tac/volumes/26/27/26-27.pdf">here</a> and <a href="http://arxiv.org/pdf/1307.5383v1">here</a>. <br /><br />But Aleks might well be interested in that 2005 paper which proves that the free wscc=multigraph category on a monoidal graph M is the full subcategory of Cospan(MonoidalGraphs/M) whose objects are discrete monoidal graphs/M. Actually the result in the 2005 paper is a special case but I pointed out at <a href="http://comocategoryarchive.com/Archive/people/walters/papers/pdf/2007-calculating-colimits-compositionally-ct2007.ppt">CT2007</a> and <a href="http://ct2010.disi.unige.it/slides/Walters_CT2010.pdf">CT2010</a> that the proof easily extends. His talk is about deciding which equations are true in multigraph categories by looking at matrix categories.<br /><br />The other talk that I suspect has connections with my lecture is that of Bob Paré. 
This is a bit of a guess because I don't have any details, but also at <a href="http://ct2010.disi.unige.it/slides/Walters_CT2010.pdf">CT2010</a> I described a model of processes as 3x3 "spans of cospans" or "cospans of spans" with two structures as wscc=multigraph categories, and another level of cells between them, in which however there is an interchange which is not an isomorphism, exactly because different amounts of synchronization occur in different composites. I did not have an abstract definition of this type of algebra, but I suspect Bob does since he speaks of "intercategories". I am very interested to see if my guess is correct, and if so what he does with them. My proposed talk was in fact to consider the simplification of cutting the corners, and finding distributive laws.Robert Waltershttp://www.blogger.com/profile/11168659017114792898noreply@blogger.com0tag:blogger.com,1999:blog-7446594.post-34046654659389770602014-06-06T14:35:00.000-07:002014-06-10T05:50:33.103-07:00The algebra of processes X<div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-hbUJS136Gu8/U5IzY0w7whI/AAAAAAAABuw/XIqzFHXPbvU/s1600/page1.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-hbUJS136Gu8/U5IzY0w7whI/AAAAAAAABuw/XIqzFHXPbvU/s1600/page1.jpg" height="357" width="400" /></a></div><a name='more'></a><br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-J7gQKeH4YFU/U5Izbeq43wI/AAAAAAAABu8/31tRI0x0ej8/s1600/page2.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-J7gQKeH4YFU/U5Izbeq43wI/AAAAAAAABu8/31tRI0x0ej8/s1600/page2.jpg" height="400" width="320" /></a></div><div class="separator" style="clear: both; text-align: left;">(There is a mistake in the next page, pointed out to me by Matias Menni. 
I should have said connected components preserves <i>products</i> of reflexive graphs - but the graphs I am interested in are graph objects in the category of labelled graphs - i.e. G, H, K are labelled graphs, and G • H is the product in labelled graphs - a pullback there .)</div><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-tMuiLFA4keA/U5IzbBHWQ6I/AAAAAAAABu4/_I5gKGrTeuA/s1600/page3.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-tMuiLFA4keA/U5IzbBHWQ6I/AAAAAAAABu4/_I5gKGrTeuA/s1600/page3.jpg" height="400" width="280" /></a></div><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-ArRgGzurjJk/U5IzeuND74I/AAAAAAAABvI/dR6n_XLgwf4/s1600/page4.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-ArRgGzurjJk/U5IzeuND74I/AAAAAAAABvI/dR6n_XLgwf4/s1600/page4.jpg" height="400" width="291" /></a></div><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-sceaOTfEJl0/U5Ize9vhV0I/AAAAAAAABvM/KFDcEOFvueQ/s1600/page5.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-sceaOTfEJl0/U5Ize9vhV0I/AAAAAAAABvM/KFDcEOFvueQ/s1600/page5.jpg" height="169" width="320" /></a></div><div class="separator" style="clear: both; text-align: center;"><br /></div><div class="separator" style="clear: both; text-align: left;">What is the meaning of this second example? Remember that in the notion of process G, H, K, L are actually graphs, not just objects of a category. So we are considering the parallel of two processes each of which is a sequence of two processes. Consider a behaviour of such a system. Notionally we begin parallel behaviours in the automata G and K. 
Now there are three different things that can happen: (i) we remain in G while K exits and passes to L, (ii) we exit from both processes and pass to H in parallel with L, (iii) G exits and passes to H while the second process remains in K. And so on.</div><div class="separator" style="clear: both; text-align: left;">The whole system is a sequential structure of parallel processes.</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">I notice that the programme for Category Theory 2014 in Cambridge has been published. It includes my lecture on the algebra of processes. I must say that looking at the other talks I feel a little bit like a shag on a rock. The conference is very much concerned with category theory in mathematics and logic, and contains nothing resembling my talk.</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">As I have mentioned earlier I would like very much to attend, but am still not sure whether I will be able to, as I have health problems.</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">Regarding the changed form of the post: I am sure if you are really interested you will manage to decipher it. 
It was produced by the high tech means of writing on paper and taking a photograph with my phone.</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;"><br /></div><br /><br />Robert Waltershttp://www.blogger.com/profile/11168659017114792898noreply@blogger.com0tag:blogger.com,1999:blog-7446594.post-74685831300792941422014-06-04T05:30:00.002-07:002014-06-04T05:31:09.108-07:00Our house on the lake<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-8Et7ttiRMNQ/U48QYyfeUxI/AAAAAAAABuI/eJSledgEZnk/s1600/2012-12-28+10.03.15.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-8Et7ttiRMNQ/U48QYyfeUxI/AAAAAAAABuI/eJSledgEZnk/s1600/2012-12-28+10.03.15.jpg" height="300" width="400" /></a></div><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-5QsGRMOOuDs/U48RS4rDEWI/AAAAAAAABuQ/UbJdmspJWRs/s1600/2012-12-28+10.03.56.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-5QsGRMOOuDs/U48RS4rDEWI/AAAAAAAABuQ/UbJdmspJWRs/s1600/2012-12-28+10.03.56.jpg" height="300" width="400" /></a></div><br />Robert Waltershttp://www.blogger.com/profile/11168659017114792898noreply@blogger.com0tag:blogger.com,1999:blog-7446594.post-1900396719055053412014-06-04T03:11:00.001-07:002014-06-04T03:12:33.520-07:00Steve Bloom's treeWhen Steve died in 2010 we said we would plant a tree in his memory at our mountain house. 
Here it is:<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-ANb-hlj2dvw/U47w1AgbS5I/AAAAAAAABt4/oY-lYYbWAPQ/s1600/2014-06-02+19.20.26.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-ANb-hlj2dvw/U47w1AgbS5I/AAAAAAAABt4/oY-lYYbWAPQ/s1600/2014-06-02+19.20.26.jpg" height="300" width="400" /></a></div><br />Robert Waltershttp://www.blogger.com/profile/11168659017114792898noreply@blogger.com0tag:blogger.com,1999:blog-7446594.post-38969438715051283712014-05-31T14:09:00.001-07:002014-05-31T21:58:45.715-07:00The algebra of processes IXI have had some problems so there has been a break in this series of posts. But also I was not quite sure how to explain what I called the distributive laws with the rather primitive mathematical resources of my blogspot. I should really find out how to incorporate TeX.<br /><a name='more'></a><br /><br />To go back to the question of distributive laws in the CospanSpan algebra, the aim of such laws is as follows: Call an expression in the sequential cospan operations a Sigma expression and an expression in the parallel span operations a Pi expression. Then it should be possible to rewrite a Pi Sigma expression as a Sigma Pi expression (evaluating to the same process). The fact that this is possible is why I spoke in the abstract of a "ring of processes".<br /><br />The example G•(H;K)=(G<sub>0</sub>•H);(G<sub>1</sub>•K) from the last post was of this type. However the general problem of rewriting Pi Sigma → Sigma Pi is more complicated. I think I will discuss only the special case of rewriting a product of Sigma expressions.<br /><div><br /></div><div>It will take me a little time to explain why Pi distributes over Sigma but the key fact is that the connected components of the product of two reflexive graphs = the product of the connected components of each graph. 
If you check this you will see that it is closely connected with the idea of asynchrony. </div><div>(This is usually referred to as a property of reflexive coequalizers.)</div><div><br /></div><div>Let's look at evaluating a Sigma expression. I am interested in the central object of the process. </div><div><br /></div><div>Caution: in this theory there are spans, cospans and graphs everywhere, playing different roles. We have to keep things separate. So we are now going to ignore the fact that the central object of a process is a graph - just think of it as an object.</div><div><br /></div><div>Remember that composing two cospans the new central object is a pushout of a span between the centres of each cospan. But more generally the evaluation of a Sigma expression G is a sum of the central objects of the cospans involved (excluding the constants) G1, G2, ..., Gn say, quotiented by spans Gi,j between them. But the Gi's and the Gi,j's give a graph with vertex set G1+G2+...+Gn and edge set the sum of the spans. We call this graph e(G). The evaluation of the expression is the set Q of connected components of this graph e(G). Notice that if we add the identity spans to each object Gi the graph is reflexive and the quotient is unchanged.<br /><br />Now consider another Sigma expression H with central objects of the cospans H1, H2, ... , Hm, spans Hk,l and evaluation R=connected components of e(H).<br /><br />Consider then the Pi Sigma expression GxH. Its evaluation is<br />QxR = connected components e(G ) x connected components e(H)<br /> = connected components (e(G) x e(H)) since the graphs are reflexive.<br /><br />But the graph e(G) x e(H) has vertex set the sum of the Gi x Hk, and edge set the sum of the spans Gi,j x Hk,l.<br /><br />With a little effort this can be seen to be equivalent to a Sigma Pi expression as promised.<br /><br />Remark: I know this is very brief. 
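The key fact is easy to test on small examples. Here is a Python sketch (the encodings are mine): a graph is a pair (vertices, edges), connected components are computed on the underlying undirected graph, and the product of graphs is taken componentwise on edges. With a loop at every vertex the count of components of the product equals the product of the counts; dropping the loops breaks this.

```python
def pi0(vertices, edges):
    """Connected components (the coequalizer of source and target)."""
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen, comps = set(), []
    for v in vertices:
        if v in seen:
            continue
        comp, stack = set(), [v]
        while stack:
            x = stack.pop()
            if x not in comp:
                comp.add(x)
                stack.extend(adj[x])
        seen |= comp
        comps.append(frozenset(comp))
    return comps

def product(G, H):
    """Product graph: vertices and edges are taken componentwise."""
    (VG, EG), (VH, EH) = G, H
    return ([(u, w) for u in VG for w in VH],
            [((u, w), (u2, w2)) for u, u2 in EG for w, w2 in EH])

def reflexive(G):
    """Add a loop at every vertex."""
    V, E = G
    return V, list(E) + [(v, v) for v in V]

G = (['a', 'b', 'c'], [('a', 'b')])   # two components: {a,b} and {c}
H = (['x', 'y'], [('x', 'y')])        # one component

Gr, Hr = reflexive(G), reflexive(H)
# For reflexive graphs, pi0 of the product = product of the pi0's:
print(len(pi0(*product(Gr, Hr))), len(pi0(*Gr)) * len(pi0(*Hr)))  # 2 2
# Without the loops the comparison fails:
print(len(pi0(*product(G, H))), len(pi0(*G)) * len(pi0(*H)))      # 5 2
```

The loops are exactly what let one component idle while the other moves, which is why I said the fact is closely connected with asynchrony.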
I will give two examples in the next post which may make the matter clearer.</div>Robert Waltershttp://www.blogger.com/profile/11168659017114792898noreply@blogger.com0tag:blogger.com,1999:blog-7446594.post-21345065604998071772014-05-15T08:11:00.000-07:002014-05-15T22:58:01.096-07:00The algebra of processes VIII - the distributive lawsI want to first say something about the abstract setting. In the arXiv paper (arXiv:0909.4136) we considered a more complicated notion of process (mentioned in the second post of this series) with nine graphs, and we described many operations. We now believe that this definition was too general, and instead a process should consist of five graphs A, B, X, Y, G and four morphisms δ<sub>0</sub>: G → A, δ<sub>1</sub>: G → B, γ<sub>0</sub>: X → G, γ<sub>1</sub>: Y → G as we have been discussing in these posts.<br /><br /><a name='more'></a><br /><br /><br />What I will describe is essentially what we have written in the paper <a href="http://comocategoryarchive.com/Archive/people/walters/papers/pdf/2000-iwim.pdf">A Formalization of the IWIM Model, Coordination 2000, SLN in CS.</a> That paper was however written for a computer science conference and was not sufficiently abstract, and was without data types.<br /><br />I have said that such a thing is an arrow in Cospan(Graph/AxB) and the operations there are composition (denoted G;H) and sum (denoted G+H).<br /><br />But such a thing also belongs to another category. Let T be the three-object category which is a model of a cospan (that is, has a central object and two arrows, one from each of the other objects into the central object). 
Then a process is also an arrow in Span(Graph <sup>T</sup>) in the following way:<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-zGdxyXCSKN8/U3TVKI0qoGI/AAAAAAAABsc/0h0yKRT2YPQ/s1600/square.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-zGdxyXCSKN8/U3TVKI0qoGI/AAAAAAAABsc/0h0yKRT2YPQ/s1600/square.png" /></a></div>The central arrows are the deltas and gammas; the left-hand side and the right-hand side consist of identities. The whole diagram is a span of cospans, that is, an arrow in Span(Graph <sup>T</sup>).<br />The composition in this category will be denoted G•H and the tensor G⊗H.<br /><br />Now the parallel operations distribute over the sequential ones in a particular way. I describe one:<br />Consider a process G with left parallel interface A, right parallel interface B, and processes H,K with left parallel interface B, right parallel interface C. Let G=G<sub>0</sub>;G<sub>1</sub> be the tabulation of G considered as a cospan (of spans) (that is, G<sub>0</sub> is the half of G pointing backward; G<sub>1</sub> is the part pointing forward).<br /><br /> Then G•(H;K)=(G<sub>0</sub>•H);(G<sub>1</sub>•K).<br /><br /><br />I will try to say something of the meaning of this law in the next post.<br />Suffice it to say here that this law (and the other similar ones) allows the flattening of a hierarchical system.<br />See the Coordination paper above for an example.<br /><br />Another paper with the span-cospan algebra in a probabilistic context is:<br />Luisa de Francesco Albasini, Nicoletta Sabadini, Robert F. C. Walters: The compositional construction of Markov processes II. RAIRO - Theor. Inf. and Applic. 
45(1): 117-142 (2011)<br />An early version available at arXiv:1005.0949v1.<br />The operations corresponding to the ones we have discussed are called parallel, parallel-with-communication, local sum, and local sequential.Robert Waltershttp://www.blogger.com/profile/11168659017114792898noreply@blogger.com0tag:blogger.com,1999:blog-7446594.post-2784017663203858602014-05-04T23:42:00.002-07:002014-05-15T06:26:26.894-07:00The algebra of processes V<div dir="ltr">SEQUENTIAL PROGRAMMING in Cospan(Graph) continued:</div><div dir="ltr">We are now not interested in the alphabet A, but just in writing sequential programs whose input/output behaviour yields partial functions, so we take A to be a one-letter alphabet and ignore it.</div><div dir="ltr"><br /></div><div dir="ltr">There is an operation I have not introduced which strictly speaking is not in Cospan because it is a little bit of parallelism, which however does not involve any parallel communication. We will see later that it is an operation which involves Span and Cospan.<br /><a name='more'></a><br /><br /></div><div dir="ltr">Consider a cospan of graphs X ← G → Y where the states of G have a given finite sum decomposition as U<sub>1</sub>+U<sub>2</sub>+...+U<sub>n</sub>, which induces (as we have seen before) finite sum decompositions of X and Y as (say) X<sub>1</sub>+...+X<sub>m</sub> and Y<sub>1</sub>+...+Y<sub>k</sub>, and a decomposition of the edges of G into spans G<sub>i,j</sub>.<br /><br />Now given a set Z let l(Z) be the graph with vertex set Z and one loop at each vertex.<br /><br />Then we can form a new cospan with set of states ZxU<sub>1</sub>+ZxU<sub>2</sub>+...+ZxU<sub>n</sub>, with left interface ZxX<sub>1</sub>+ZxX<sub>2</sub>+...+ZxX<sub>m</sub>, right interface ZxY<sub>1</sub>+ZxY<sub>2</sub>+...+ZxY<sub>k</sub>, with spans being l(Z)xG<sub>i,j</sub>.<br /><br />If you write out this cospan explicitly you will see that you need the distributive law isomorphism of products over sums to define the
new cospan.<br /><br />We will call this new cospan ZxG.<br /><br />What is the meaning of ZxG? Why is it needed in sequential programming? It is just the fact that you need to keep variables, Z in this case, in memory while you perform a behaviour of G.<br /><br />I will show you how this comes up in describing the constructions "if then else" and "while".<br /><br />(Before I forget, there are two more constants you need for sequential programming: (i) the diagonal function (copy) Z →ZxZ considered as a cospan, and (ii) the function (forget) Z →I as a cospan.)<br /><br />I will describe "if then else", "while" and "addition" just as pictures and let you think about why they are expressions in Cospan(Graph) and why they do what is implied by the names. Of course you will have realized by now that these pictures are formalizations of flow charts in Cospan(Graph).<br />Note: In the pictures the lines indicate identification of state.<br />(The pictures are a little crude and perhaps I should improve them.)<br /><br />IF P THEN DO F ELSE G where P is a cospan from X to 1+1 and F and G are cospans from Y to Z.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-5pxvM5WHM9E/U2c5DIFP6VI/AAAAAAAABrM/XdtTJBoMADw/s1600/ifthenelse.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-5pxvM5WHM9E/U2c5DIFP6VI/AAAAAAAABrM/XdtTJBoMADw/s1600/ifthenelse.png" height="188" width="400" /></a></div><br /><br /><br />WHILE P DO F where P is a cospan from Y to 1+1 and F is a cospan from Y to Y.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-8JgUZO6VzrA/U2dAGlbQXKI/AAAAAAAABrc/umo9rGAwzB0/s1600/while.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-8JgUZO6VzrA/U2dAGlbQXKI/AAAAAAAABrc/umo9rGAwzB0/s1600/while.png" height="158" width="400" /></a></div><br
/><br /><br />ADDITION: a program for computing addition from the cospans predecessor and successor described in an earlier post.<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-Afdo_8sspU4/U2cyujK9CII/AAAAAAAABq8/_ZDR3NbpntU/s1600/add.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-Afdo_8sspU4/U2cyujK9CII/AAAAAAAABq8/_ZDR3NbpntU/s1600/add.png" height="175" width="400" /></a></div><br /><br /><br /></div>Robert Waltershttp://www.blogger.com/profile/11168659017114792898noreply@blogger.com0tag:blogger.com,1999:blog-7446594.post-11508631708562237312014-05-15T06:03:00.001-07:002014-05-15T06:25:39.483-07:00The algebra of processes VIIPARALLEL PROGRAMMING in Span(Graph)<br /><br />Last post I promised to give an example of a parallel program. Here is a very simple one which is the parallel composite of two components both having state space N+N+N (N= natural numbers).<br />I have distinguished the different summands by subscripts to make the description of the composite clearer:<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-dD09FRTahgQ/U3SjmrJNxHI/AAAAAAAABsE/G490zc3_pdo/s1600/concur.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-dD09FRTahgQ/U3SjmrJNxHI/AAAAAAAABsE/G490zc3_pdo/s1600/concur.png" height="143" width="400" /></a></div><br /><a name='more'></a>The first component is a span from 1 to {a,ε}, the second from {a,ε} to 1. The initial value I have in mind is a pair of numbers in N<sub>1</sub>xN<sub>1</sub>. The left-hand process calculates the function f, then does a test N<sub>2</sub>->N<sub>1</sub>+N<sub>3</sub>. If the behaviour arrives in N<sub>1</sub> then it repeats. If the behaviour arrives in N<sub>3</sub> it waits until also the second component (which has meanwhile been calculating g) arrives in N<sub>3</sub>. 
They then both make a synchronized action returning to N<sub>1</sub>. And so on.<br /><br />It is a rather simple abstract example, but already the state space of the composite has 9 summands N<sub>i</sub>xN<sub>j</sub>. (It would have been nice to program the dining philosophers and their thoughts but I haven't the patience here.)<br /><br />Let me at least draw the graph of the composite, including the summands reachable from N<sub>1</sub>xN<sub>1</sub>. (I won't bother to specify the precise spans between the summands.)<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-txUa78ANPdE/U3S69eCx00I/AAAAAAAABsQ/fgYlG5f8xDw/s1600/compconcur.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-txUa78ANPdE/U3S69eCx00I/AAAAAAAABsQ/fgYlG5f8xDw/s1600/compconcur.png" height="289" width="640" /></a></div>Notice that the graph is non-deterministic at N<sub>3</sub>xN<sub>3</sub> - there is no guarantee that both processes will not idle together in this summand indefinitely (this was why I said in the arXiv paper that eventually (in the Italian sense) they will proceed to synchronize on a and return to N<sub>1</sub>xN<sub>1</sub>). To make the system deterministic would require more programming.<br /><br />I forgot to say that the programming of each process (component) is done sequentially in Cospan(Graph/{a,ε}).<br /><br />Another remark: with the finiteness assumptions we have made, processes can only communicate by a finite number of signals.
To pass (or share) data would require different finiteness assumptions which would allow the alphabets to be infinite.<br /><br />Next post I would like to discuss the abstract context and describe the distributive laws.<br /> (It seems now that I will not be able to attend the conference for which I wrote the original abstract, so I guess these blog posts have been useful after all.)Robert Waltershttp://www.blogger.com/profile/11168659017114792898noreply@blogger.com0tag:blogger.com,1999:blog-7446594.post-68579282577415712802014-04-22T03:14:00.003-07:002014-05-10T23:09:04.205-07:00On the algebra of processesThe following is an abstract submitted to a conference (only half a page was permitted). I would like to expand a little here on the matter in the next day or so.<br /><br />--------------------------------------------<br />Abstract: On the algebra of processes<br /><a name='more'></a><br /><br />Process algebras were introduced by Hoare, Milner and Bergstra-Klop to extend the theory of sequential computation to include parallel and concurrent computation. The basic idea of Milner was to add a parallel operation symbol to the successful algebra of Kleene of regular languages. Notice that Kleene's algebra is an algebra of behaviours rather than of programs/automata/machines. Milner's algebra inherits that fact, and is more concerned with observation of systems than with systems themselves.<br /><br />In 1997 Katis, Sabadini and I proposed the category Span(Graph) as a parallel algebra of automata, and (in 2000) Cospan(Graph) as the associated sequential algebra. The ideas were subsequently developed with collaborators, including R. Rosebrugh. In 2004 at the meeting in Vancouver I proposed with Sabadini the more abstract notion of a process (in a topos <b>C</b>) with parallel and sequential interfaces, namely an object G together with four objects A, B, X, Y and four arrows d0:G->X, d1:G->Y, g0:A->G, g1:B->G.
Such a diagram may be regarded as a span of cospans or a cospan of spans. It exists as an arrow in two monoidal categories: Span((A+B)\<b>C</b>) (with parallel operations), and Cospan(<b>C</b>/(XxY)) (with sequential operations). Both of these categories have commutative Frobenius structures on the objects. Processes may be combined in parallel in the first category and in sequence in the second. Alternating these operations leads to hierarchical systems.<br /><br />We will describe recent developments including distributive laws between the parallel and sequential operations, which permit the flattening of hierarchy. In fact, the combined algebra forms a kind of <i>ring</i> of processes.<br />-------------------------------------Robert Waltershttp://www.blogger.com/profile/11168659017114792898noreply@blogger.com0tag:blogger.com,1999:blog-7446594.post-63713665995044071832014-04-29T14:26:00.002-07:002014-05-10T23:08:38.643-07:00On the algebra of processes III have decided to write a sequence of short posts explaining the ideas behind the abstract I recently posted. I forgot to mention in the abstract that there is a 2009 paper on Arxiv by Luisa de Francesco Albasini, Nicoletta Sabadini and me with some more explanation of our ideas (arXiv:0909.4136 ). It was a summary of thoughts over the summer holiday in Santa Caterina Valfurva that year and was never published. 
It lacks the distributive laws, and promises to later "fill out details of matters sketched", a promise I hope to partially fulfill in these posts.<br /><a name='more'></a><br /><br />In that paper the definition of process is a bit different to the one in the abstract:<br /><div class="separator" style="clear: both; text-align: center;"><img border="0" src="http://2.bp.blogspot.com/-CP5lVE9gvdc/U2AGoIKkDwI/AAAAAAAABos/XI5N1yxhIfI/s1600/defn3.png" height="142" style="cursor: move;" width="400" /></div><div class="separator" style="clear: both; text-align: center;"><br /></div><div class="separator" style="clear: both; text-align: left;">The definition in the abstract is an interesting limited special case.</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">An easier reference to this work (but with the probabilistic case in mind) is the <a href="http://ct2010.disi.unige.it/slides/Walters_CT2010.pdf">slides</a> to my talk at CT 2010 in Genova.</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">But let me begin with something much simpler:</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">THE ALGEBRA OF SEQUENTIAL PROCESSES AS COSPANS(GRAPH/A): Finite state case</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">First let's look at the finite state case, non-deterministic finite state automata (NFA) with Kleene operations. An NFA is a graph labelled in an alphabet A, with a specified initial state and a set of final states. The type of graph allowed is more limited than we have in mind, namely it is a set Q of states and a subset of QxQ.
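Before generalizing, the classical object itself is easy to make concrete. Below is a minimal sketch of an NFA as a set of labelled transitions together with nondeterministic acceptance; the particular transitions, state names and function name are my own illustration, not the automaton of this post.

```python
# A minimal sketch of an NFA: transitions as a set of (state, letter, state)
# triples, i.e. a labelled version of "a subset of QxQ".
def accepts(transitions, initial, finals, word):
    """Run `word` nondeterministically; accept if some run ends in `finals`."""
    current = {initial}
    for letter in word:
        current = {q2 for (q1, a, q2) in transitions
                   if q1 in current and a == letter}
    return bool(current & finals)

# Hypothetical example over {a, b}: accepts exactly the words ending in "ab".
T = {("q0", "a", "q0"), ("q0", "b", "q0"),
     ("q0", "a", "q1"), ("q1", "b", "q2")}
print(accepts(T, "q0", {"q2"}, "aab"))  # True
print(accepts(T, "q0", {"q2"}, "aba"))  # False
```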
An example with alphabet {a,b,c}, initial state q0, and final states q2, q3, q4:</div><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-pe2XyYWImWc/U2APJUa43CI/AAAAAAAABo8/IXurLrM5tRA/s1600/nfa.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-pe2XyYWImWc/U2APJUa43CI/AAAAAAAABo8/IXurLrM5tRA/s1600/nfa.png" height="222" width="400" /></a></div><div class="separator" style="clear: both; text-align: left;">Now looking at this picture you can see that we can make a small generalization of NFA by taking general graphs which have a set of vertices, a set of edges and domain and codomain functions, and taking not just initial and final states but a function into the states on the left from a finite set, and a similar function into the states on the right - that is, a cospan of graphs labelled in A between discrete graphs.</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">Putting NFA's in Cospan(Graph/A) has an immediate advantage. Cospan(Graph/A) has a composition by pushout, a symmetric monoidal structure from disjoint sum, a compact closed structure which permits feedback, and a Frobenius algebra structure on the objects.</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">Of course there are Kleene operations on NFA's which have a similar intent but are much less canonical. For example, every automaton can be constructed from single arrow automata using the operations of Cospan(Graph/A). This is NOT true for Kleene operations. Moreover, Cospan(Graph/A) between discrete graphs is the free symmetric monoidal category with ... on the single arrow cospans.
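As a concrete gloss on "composition by pushout": for cospans between discrete graphs the gluing happens on the state sets, by identifying the two images of the middle interface in a disjoint union. Here is a minimal sketch for finite sets, using a little union-find; the function names and the example are assumptions of mine, not notation from the papers.

```python
# Pushout of G <-g- Y -h-> H in Set: the disjoint union of G and H with
# g(y) identified with h(y) for each y in Y (computed by union-find).
def pushout(G, H, g, h, Y):
    """g, h: dicts Y -> G and Y -> H. Returns the set of glued classes."""
    parent = {("G", x): ("G", x) for x in G}
    parent.update({("H", x): ("H", x) for x in H})

    def find(a):
        while parent[a] != a:
            a = parent[a]
        return a

    for y in Y:
        a, b = find(("G", g[y])), find(("H", h[y]))
        if a != b:
            parent[a] = b
    classes = {}
    for node in parent:
        classes.setdefault(find(node), set()).add(node)
    return set(map(frozenset, classes.values()))

# Hypothetical example: glue two 2-state graphs along a 1-point interface.
result = pushout({0, 1}, {0, 1}, {"y": 1}, {"y": 0}, {"y"})
print(len(result))  # 3 states: G's 0, the glued middle state, H's 1
```

This gluing of states is exactly what composing two automata end-to-end does to their state spaces; edges are carried along unchanged.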
These facts together with a compositional proof of Kleene's theorem on regular languages can be found<a href="http://www.tac.mta.ca/tac/volumes/15/6/15-06.pdf"> here</a> and <a href="http://arxiv.org/abs/0712.2525">here</a>.</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">One might object that to prove Kleene's theorem there is no need to go into these categorical details, and that traditional NFA's give the proof much more efficiently. (One could give essentially the same elementary proof using cospans - there is a complication - one needs to keep track of the languages between any pair of entry/exit states, not just between initial and final.) </div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">However if one is interested in NFA's as control structures in sequential programming (instead of as language recognizers) there is a real advantage in considering cospans as I hope to show in the next post. </div>Robert Waltershttp://www.blogger.com/profile/11168659017114792898noreply@blogger.com0tag:blogger.com,1999:blog-7446594.post-46714104737946203062014-04-30T05:13:00.001-07:002014-05-10T23:08:04.478-07:00On the algebra of processes IIITHE ALGEBRA OF SEQUENTIAL PROCESSES AS COSPANS(GRAPH/A): Infinite state case<br /><br />Let us now assume that the graphs may be infinite. We will however make some finiteness assumptions<br />which correspond to discrete aspects of the processes. <br /><br />Consider a cospan of graphs labelled in alphabet A, X ← G → Y. First we will take the alphabet A to be finite. 
We could make a less drastic assumption but I want to concentrate attention on X, Y, and G.<br />A consequence of the fact that A is finite is that G breaks up into subgraphs G<sub>a</sub>, one for each element a of A, but all with the same vertex set vert(G).<br /><a name='more'></a><br /><br />In considering infinite state processes with discrete control we assume that transitions occur at threshold values. The assumption is that the set of states (or vertices) of G breaks up into a finite disjoint union of sets,<br /> for example vert(G)=U<sub>1</sub>+U<sub>2</sub>+...+U<sub>n</sub>.<br /><br />What happens to a graph G when you break up its vertex set as a disjoint union U<sub>1</sub>+U<sub>2</sub>+..+U<sub>n</sub>? For each i,j you get the set of edges from U<sub>i</sub> to U<sub>j</sub> which we shall call G(i,j). There are domain and codomain functions U<sub>i</sub> ← G(i,j) → U<sub>j</sub>; that is a span from U<sub>i</sub> to U<sub>j</sub>. So a graph breaks up into a matrix of spans. This is just the fact that Span(Sets) has direct sums and a graph is an endo-arrow in Span(Sets).<br /><br />What I have said about the graph G also happens to each graph G<sub>a</sub>.<br /><br />From this matrix of spans we can draw a finite graph with n-vertices U<sub>1 </sub>, U<sub>2</sub>, ..., U<sub>n</sub>, and edges labelled by spans of sets and letters of A. (We omit edges labelled by empty spans.)<br /><br />I haven't spoken of the sets X and Y. The sets also break up (by extensivity) X=X<sub>1</sub>+X<sub>2</sub>+...+X<sub>m</sub>, Y=Y<sub>1</sub>+Y<sub>2</sub>+...+Y<sub>k</sub>. The functions X → G and Y → G also break up into functions, and we make one further assumption that all the parts of the functions X → G and Y → G are the identity function. 
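The breaking-up of a graph into a matrix of spans is easy to compute. A minimal sketch, assuming a graph is given by a list of (domain, codomain) edges and the partition by a list of vertex blocks (the names here are mine):

```python
# Given a partition of the vertices into blocks U_0..U_{n-1}, the edges
# from U_i to U_j form the middle of the span U_i <- G(i,j) -> U_j.
def span_matrix(edges, blocks):
    """edges: list of (dom, cod); blocks: list of disjoint vertex sets.
    Returns {(i, j): [edges from block i to block j]} (empty spans omitted)."""
    matrix = {}
    for (d, c) in edges:
        i = next(k for k, U in enumerate(blocks) if d in U)
        j = next(k for k, U in enumerate(blocks) if c in U)
        matrix.setdefault((i, j), []).append((d, c))
    return matrix

# Hypothetical example: a 4-cycle with vertices split as {0,1} + {2,3}.
m = span_matrix([(0, 1), (1, 2), (2, 3), (3, 0)], [{0, 1}, {2, 3}])
print(sorted(m))  # [(0, 0), (0, 1), (1, 0), (1, 1)]
```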
<br /><br />At the end of these assumptions a cospan of labelled graphs between discrete graphs looks like a labelled version of the finite cospans we considered in Processes II, that is something like:<br /><div class="separator" style="clear: both; text-align: center;"></div><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-UiLoo-Baz-Y/U2IE3s5OLjI/AAAAAAAABpo/ZjRGg9OPqG4/s1600/autominf.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-UiLoo-Baz-Y/U2IE3s5OLjI/AAAAAAAABpo/ZjRGg9OPqG4/s1600/autominf.png" height="165" width="400" /></a></div><div class="separator" style="clear: both; text-align: center;"><br /></div>where the U's are possibly infinite sets, R, S,T,L and M are spans of sets, a and b are labels.<br /><br />What this has to do with simple (Turing equivalent) sequential programming I will describe in the next post.Robert Waltershttp://www.blogger.com/profile/11168659017114792898noreply@blogger.com0tag:blogger.com,1999:blog-7446594.post-28386148108710798312014-05-02T02:13:00.004-07:002014-05-10T23:07:41.249-07:00On the algebra of processes IV<div class="separator" style="clear: both; text-align: left;">In this post I want to begin describing simple sequential programming in terms of Cospan(Graph/A). First let's look at two examples of (infinite) cospans:</div><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-21lpnMvt1K8/U2NhprbKYMI/AAAAAAAABqM/Axr0Vvft9cc/s1600/predsucc.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-21lpnMvt1K8/U2NhprbKYMI/AAAAAAAABqM/Axr0Vvft9cc/s1600/predsucc.png" height="116" width="400" /></a></div><div class="separator" style="clear: both; text-align: left;">N is the natural numbers. The span "error" is the partial function which takes 0 to error. 
The span "pred" is the partial function which takes n to n-1 if n is positive. The span "zero" is the function on one point that takes the value 0. The span "succ" is the successor function.</div><a name='more'></a><br /><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">(As a graph (instead of split up into spans) predecessor has vertices N+1+N and edges N. The edge 0 goes from 0 of the first component of N+1+N to the single element of 1. The edge n+1 goes from n+1 in the first component of N+1+N to n in the third component.)</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">The composite of predecessor followed by successor is the following cospan, a behaviour of which takes n on the left to n-1 or error inside and then to n on the right (so the input output behaviour is the identity function of N).</div><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-AZn-jTUm8TA/U2PLZTr08TI/AAAAAAAABqc/Xy8HDs8NUyw/s1600/prsu.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-AZn-jTUm8TA/U2PLZTr08TI/AAAAAAAABqc/Xy8HDs8NUyw/s1600/prsu.png" height="162" width="400" /></a></div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">The composite of successor followed by predecessor is the following cospan, a behaviour of which takes n on the left to n+1 inside and then to n on the right, and takes the one element of 1 to 0 inside and then the one element of 1 on the right (so the input output behaviour is the identity function of 1+N ). 
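As a rough check on the input/output behaviour just described, one can model the four spans as partial functions and trace the composite; this sketches only the behaviour, not the cospans themselves, and the Python names are my own.

```python
# The spans of the predecessor and successor cospans, modelled as
# (partial) functions; "err" stands for the internal error state.
def pred(n):             # defined only for positive n
    return n - 1 if n > 0 else None

def succ(n):
    return n + 1

def error(n):            # the partial function taking 0 to error
    return "err" if n == 0 else None

def zero(_):             # the one-point function with value 0
    return 0

def pred_then_succ(n):
    """Behaviour of predecessor followed by successor: n -> n-1 or err -> n."""
    inside = pred(n) if pred(n) is not None else error(n)
    return zero(inside) if inside == "err" else succ(inside)

print([pred_then_succ(n) for n in range(5)])  # [0, 1, 2, 3, 4]: identity on N
```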
</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-Uhqez5xS0qI/U2PLiPOcEqI/AAAAAAAABqk/bjV0X0yIrsI/s1600/supr.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-Uhqez5xS0qI/U2PLiPOcEqI/AAAAAAAABqk/bjV0X0yIrsI/s1600/supr.png" height="160" width="400" /></a></div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">I will describe programming in Cospan(Graph/A) mostly in terms of pictures but each program is an expression in the operations of composition, tensor (sum) and the constants of Cospan, and one extra operation to be introduced. I haven't mentioned the constants but they are as follows (states only, no transitions). </div><div class="separator" style="clear: both; text-align: center;"><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-mVlF0_yClnE/U2NO6Z3Y8OI/AAAAAAAABp8/OGHQP-ZGJGE/s1600/constants.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-mVlF0_yClnE/U2NO6Z3Y8OI/AAAAAAAABp8/OGHQP-ZGJGE/s1600/constants.png" height="180" width="400" /></a></div><div class="separator" style="clear: both; text-align: left;">The last two are derived from the others by composition. 
These constants allow the identification of states when composed with other cospans (by pushout).</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">It looks like I won't get to programming today.</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">In the next post I will describe the constructions "while" and "if then else". Together with the data type of natural numbers described above this will imply that Cospan(Graph/A) as a programming language is Turing complete.</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">For other data types described in a way appropriate for Cospan(Graph/A) see my book.</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">--------------</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">Notice that I haven't mentioned recursion. In my view recursion comes after having an algebra of processes, in the consideration of equations in the algebra.</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">The closest thing in the literature that I know to the material I am describing is the work of Elgot and collaborators who however (i) have algebras of behaviours (like Kleene) rather than of machines/programs,</div><div class="separator" style="clear: both; text-align: left;">and (ii) separate the memory from the control. It is interesting to read Dana Scott's view of Elgot's work in his foreword to Elgot's selected papers.
Certainly Scott's view has till now received more attention in the categorical community interested in programming (though not perhaps <a href="http://www.personal.psu.edu/t20/fom/postings/9802/msg00021.html">his view on 2-categories</a> reported by Vaughan Pratt).</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">I hope it will be clear from the next post that the Kleene theorem and the Böhm-Jacopini theorem are essentially the same theorem - namely that any graph (automaton, goto) may be given by an expression in Cospan(Graph/A) in terms of elementary graphs.</div>Robert Waltershttp://www.blogger.com/profile/11168659017114792898noreply@blogger.com0tag:blogger.com,1999:blog-7446594.post-63862987895455312632014-05-07T04:29:00.000-07:002014-05-10T23:06:43.928-07:00The algebra of processes VI<div dir="ltr">The PARALLEL ALGEBRA of PROCESSES Span(Graph)</div><div dir="ltr">We have just described Cospan(Graph/A) as a sequential algebra. However if we take A × B instead of A, a cospan in Graph/(A×B) from X to Y consists of four morphisms of graphs: X → G, Y → G, G → A, and G → B, where here A and B are thought of as graphs with a single vertex, and edge sets being A and B respectively.<br /><br /></div><div dir="ltr">But this is exactly an example of a process as defined in the first post of this series. X and Y are the sequential interface, A and B the parallel interface. Such a process is in Cospan(Graph/(A × B)), the sequential algebra, but also in Span((X+Y)\Graph), the parallel algebra.<br /><a name='more'></a><br /><br /></div><div dir="ltr">To concentrate on the parallel aspects I will assume here that X = Y = 0 and consider only Span(Graph).<br /><br />In the case that A, B and G are finite the algebra is discussed in many of our papers, starting with the 1997 AMAST paper.
A copy can be found here: <a href="http://comocategoryarchive.com/Archive/people/walters/papers/pdf/1997-span-graph.pdf">http://comocategoryarchive.com/Archive/people/walters/papers/pdf/1997-span-graph.pdf</a>.<br /><br />What I would like to discuss here is the case when G is infinite. We assume that A and B are finite and, as before, that the vertex set of G is a (given) finite sum of sets U<sub>1</sub>+U<sub>2</sub>+...+U <sub>n</sub>. As before the graphs G<sub>a,b</sub> <br />which project to a in A, and b in B, break up into spans G<sub>a,b;i,j </sub> (i,j=1,2,...,n). <br />We can draw the span G as a box containing a graph with vertices U<sub>i </sub>and for each a,b in A × B an edge from i to j labelled G<sub>a,b;i,j</sub>. As usual we omit edges for which the span is empty. We draw ports on the component, one on the left labelled A, and one on the right labelled B (if A or B are given as products we draw multiple ports.)<br /><br />Next I want to describe composition in Span(Graph) and give an example or two. 
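Ahead of that, the underlying operation is worth spelling out in the simplest setting: spans of sets compose by pullback. A minimal sketch, with each span presented as a list of pairs joining its two legs (an encoding of my own):

```python
# Composing spans A <- R -> B and B <- S -> C by pullback: the composite
# R x_B S consists of the pairs of elements that agree over the middle B.
def compose_spans(R, S):
    """R: list of (a, b) pairs; S: list of (b, c) pairs.
    Returns one (a, c) witness per matching pair (multiplicities kept)."""
    return [(a, c) for (a, b1) in R for (b2, c) in S if b1 == b2]

# Hypothetical example: two spans composed through a shared middle set.
R = [(1, "x"), (2, "x"), (2, "y")]
S = [("x", "u"), ("y", "v")]
print(compose_spans(R, S))  # [(1, 'u'), (2, 'u'), (2, 'v')]
```

In Span(Graph) this matching over the middle interface is what forces the two components to synchronize on the shared letters.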
Essentially the algebra is that of our papers for finite span graph but states labelled by sets, and edges labelled by spans of sets.</div>Robert Waltershttp://www.blogger.com/profile/11168659017114792898noreply@blogger.com0tag:blogger.com,1999:blog-7446594.post-65185669256453291842014-04-20T00:11:00.002-07:002014-04-20T00:12:20.588-07:00Villa Carlotta, Lake ComoTo visit in late April.<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-_1nnbw8jmoc/U1Nydfi4pEI/AAAAAAAABnk/9iSsph2fogI/s1600/2014-04-18+15.32.24.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-_1nnbw8jmoc/U1Nydfi4pEI/AAAAAAAABnk/9iSsph2fogI/s1600/2014-04-18+15.32.24.jpg" height="320" width="240" /></a></div><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-LzNVoe34B6Q/U1Nyg6EF4sI/AAAAAAAABnw/jC4tM1rHvco/s1600/2014-04-18+16.14.58.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-LzNVoe34B6Q/U1Nyg6EF4sI/AAAAAAAABnw/jC4tM1rHvco/s1600/2014-04-18+16.14.58.jpg" height="320" width="240" /></a></div><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-Pg5-YNVaLL0/U1NydJk-2nI/AAAAAAAABng/cAorwcabalc/s1600/2014-04-18+16.16.08.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-Pg5-YNVaLL0/U1NydJk-2nI/AAAAAAAABng/cAorwcabalc/s1600/2014-04-18+16.16.08.jpg" height="320" width="240" /></a></div><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-g4ZA9poEUJk/U1NyvRvRwrI/AAAAAAAABn4/FrCD7MOYleQ/s1600/2014-04-18+16.17.29.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-g4ZA9poEUJk/U1NyvRvRwrI/AAAAAAAABn4/FrCD7MOYleQ/s1600/2014-04-18+16.17.29.jpg" height="320" 
width="240" /></a></div><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-o4neaW5RvTs/U1NyxxsxnrI/AAAAAAAABoA/MUSPMlriv3o/s1600/2014-04-18+16.23.01.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-o4neaW5RvTs/U1NyxxsxnrI/AAAAAAAABoA/MUSPMlriv3o/s1600/2014-04-18+16.23.01.jpg" height="240" width="320" /></a></div><br />Robert Waltershttp://www.blogger.com/profile/11168659017114792898noreply@blogger.com0tag:blogger.com,1999:blog-7446594.post-90943028379593822522014-04-14T08:00:00.002-07:002014-04-15T03:56:00.314-07:00How to coil a hoseI just bought a hose to water the newly planted roses at our mountain house. Here is one Nicoletta planted last year:<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-3kbR7zumXNo/U0vsfFKDzgI/AAAAAAAABk0/pZzgq3mzdas/s1600/2013-06-23+19.26.41.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-3kbR7zumXNo/U0vsfFKDzgI/AAAAAAAABk0/pZzgq3mzdas/s1600/2013-06-23+19.26.41.jpg" height="240" width="320" /></a></div><a name='more'></a><br /><div class="separator" style="clear: both; text-align: left;"></div><a href="https://www.blogger.com/null" name="more"></a>The hose came rolled so:<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-ebDvcDXoalc/U0vyt9t1K0I/AAAAAAAABlE/Hxc61J6quT0/s1600/2014-04-14+16.32.12.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-ebDvcDXoalc/U0vyt9t1K0I/AAAAAAAABlE/Hxc61J6quT0/s1600/2014-04-14+16.32.12.png" height="150" width="320" /></a></div><div class="separator" style="clear: both; text-align: left;">After unrolling it looked like this:</div><div class="separator" style="clear: both; text-align: center;"><a 
href="http://2.bp.blogspot.com/-lwdo3wrvtj0/U0zzCQALNLI/AAAAAAAABm8/zkkFNtMpcjE/s1600/2014-04-15+09.56.44.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-lwdo3wrvtj0/U0zzCQALNLI/AAAAAAAABm8/zkkFNtMpcjE/s1600/2014-04-15+09.56.44.png" height="117" width="320" /></a></div><div class="separator" style="clear: both; text-align: center;"><br /></div><div class="separator" style="clear: both; text-align: left;">and no water issued. This is a well-known problem in <a href="http://en.wikipedia.org/wiki/Mountaineer's_coil">mountaineering</a>, sailing, in the <a href="http://en.wikipedia.org/wiki/Over/under_cable_coiling">storing of electrical cables</a> etc.</div><div class="separator" style="clear: both; text-align: left;">They recommend that you coil rope/hoses/cables either like:</div><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-EuGIkwFrkWA/U0v6hI9s0bI/AAAAAAAABls/EjvQgZcDAXw/s1600/2014-04-14+17.08.49.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-EuGIkwFrkWA/U0v6hI9s0bI/AAAAAAAABls/EjvQgZcDAXw/s1600/2014-04-14+17.08.49.png" height="128" width="320" /></a></div><div class="separator" style="clear: both; text-align: center;"></div><div class="separator" style="clear: both; text-align: left;">or like this:</div><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-BBVtZg5zcKY/U0v0KQQh-II/AAAAAAAABlc/NvoKLKGTMhU/s1600/2014-04-14+16.31.23.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-BBVtZg5zcKY/U0v0KQQh-II/AAAAAAAABlc/NvoKLKGTMhU/s1600/2014-04-14+16.31.23.png" height="202" width="320" /></a>.</div><div class="separator" style="clear: both; text-align: left;">A simple calculation in the braided monoidal category of tangles shows that these expressions are equal to the identity braid 
(whereas the expression of the first is the composite of 2n twists).</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">You may have noticed that I have not blogged for months. That is because the family has had a sequence of serious illnesses over the last two years. The last was the most stressful.</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">I hope we are back to a period of tranquility.</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="http://lh3.ggpht.com/-E-_TI05FAWU/U0zca-o_VLI/AAAAAAAABmo/Ebbg33K1xwA/s1600/2014-04-15%25252008.42.05.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"> </a> </div>Robert Waltershttp://www.blogger.com/profile/11168659017114792898noreply@blogger.com0tag:blogger.com,1999:blog-7446594.post-65775192272344085922013-12-11T00:52:00.001-08:002013-12-11T01:37:17.974-08:00Old posts: Max Kelly 2007With the closing of our offices in via Carloni the <a href="http://dscpi.uninsubria.it/">web server</a> of the old department was offline for a time and I suspect will not be reliable in future. I have decided to transfer material from the server to this blog or to the <a href="http://comocategoryarchive.com/Archive/">Como Category Archive</a>. <br />The first item is a post made when Max Kelly died in 2007.<br /><br /><a name='more'></a><br /><br />---------------------------------------<br />Max Kelly died on the 26th January 2007. 
I have known Max since, as a graduate student of Bernhard and Hanna Neumann in Canberra, I travelled to Sydney to see him one day about my developing interest in Category Theory. It must have been 1968.<br /><br />After I returned from Chicago in 1970 to a position at the University of Sydney, I attended with Don Cartwright a course of Max's at the University of New South Wales, and then in 1971 Max went on leave, Ross came to Macquarie, and we started the Category Seminar. The seminar, first called the Sydney Category Seminar and now the Australian Category Seminar, with Max and Ross, and with the visitors who came, and the students, was the basis of my research life. I had many struggles with Max over the years. I am very pleased that at the end we were writing a paper together with Richard Wood and Aurelio Carboni. <br /><br />I was deeply saddened by his death. <br /><br />I include some of the letters written to the Categories mailing list after his death was announced. <br />-----------------------------------------------<br />From: Bill Lawvere <br />To: categories@mta.ca <br />Date: Saturday 27 January 2007 16.31.58 <br /><br />I am deeply saddened by the loss of Max. In our field he was a rock of reliability and a fountain of imagination. I will miss my lively, warm, kind, and sometimes mischievous friend. <br /><br />Bill Lawvere <br />---------------------------------------------<br />From: Eduardo J. Dubuc <br />To: categories@mta.ca <br />Date: Sunday 28 January 2007 17.39.02 <br /><br />I am deeply saddened by the death of Max Kelly. When I saw the subject of Ross's posting, and before opening the message, my heart already felt anguish. I am more saddened by his loss than I have been by the loss of any other member of our category theory community. In fact, I loved Max. I admired his courage, his independence of thought, his lack of hypocrisy, and I loved him simply for the way he was. I am proud that he considered me his friend. 
For me, our community is not the same without Max. <br />--------------------------------------------<br />From: Richard Wood <br />To: categories@mta.ca <br />Date: Monday 29 January 2007 21.11.43 <br /><br />Max Kelly, a master of coherence <br /><br />We would like to add our own feelings of deep sadness at Max's death to Bill and Eduardo's letters. <br /><br />Max Kelly's Last Work <br />===================== <br /><br />In due course Max's last work will appear in a four-author paper. While it is not usual for coauthors to divulge who contributed what to a paper, the present circumstances seem to warrant it, as an appreciation of Max's extraordinary talents and tenacity. Carboni, Kelly, Walters, and Wood [CKWW] have for some time been extending the Carboni and Walters notion of `cartesian bicategory' to the general case of bicategories that are not necessarily locally ordered. <br /><br />A cartesian bicategory B ultimately has a tensor product, a pseudofunctor *:BxB--->B that is naively associative and unitary. It is natural to ask whether such (B,*) is a monoidal bicategory, in other words a one-object tricategory in the sense of [Coherence for Tricategories; Gordon, Power, and Street]=[GPS]. Early in 2005 [CKWW] had shown that _if_ a bicategory A with finite `products' -x- and 1, in the bilimit sense, has (A,x) a monoidal bicategory then a cartesian bicategory B has (B,*) monoidal. In the course of polishing the paper it came to Max's attention that nobody had _proved_ the <br /><br />Theorem: A bicategory with finite products is monoidal.<br /><br />Nobody doubted the truth of this. In fact, experts in higher dimensional category theory said that if it were not true then the definition of tricategory is wrong! 
But when you consider the rather large amount of data that must be assembled and the many equations (some merely implicit in words such as pseudonatural and modification) that must be verified from the apparently rather weak universal property of finite products in the bilimit sense, it seemed like a rather thankless task to write out the details. <br /><br />This was to Max a completely unacceptable state of affairs. If nobody doubts the statement then it must be possible to find a good proof! Now Max had no intention of redrawing any of the diagrams in [GPS]. <br /><br />For the last few years Max, with little central vision left as a result of macular degeneration, has been doing Mathematics using an 8-fold magnification monitor. This allowed him to see only a few square centimetres of a page at a time. Many [GPS] diagrams consume an entire page. His proof, that we were privileged to receive in the last few weeks, has _no_ diagrams (though doubtless we will incorporate a few in a publishable version of the paper). Max attributed the key idea in his proof to Ross Street. <br /><br />Briefly, this is how it goes: For X a finite family of objects in the bicategory A, write A(X) for the bicategory of product cones over X. Thus an object of A(X) consists of an object b of A, together with a family of arrows p_i:b--->X_i, such that for all a, the induced functor A(a,b)--->Pi A(a,X_i) is an equivalence of categories. <br />Lemma: !:A(X)--->1 is a biequivalence <br />(Recall that to say B--->1 is a biequivalence is to say that: i) there is an object b in B ii) for any objects c and d in B, there is an arrow f:c--->d iii) for any arrows g,h:c--->d in B, there is a unique 2-cell g--->h. It follows that in a bicategory biequivalent to 1, every arrow is an equivalence and every 2-cell is an isomorphism.) 
<br /><br />Next, Max observes that if A has finite products then, for any B, the bicategory [B,A] of pseudofunctors, pseudonatural transformations, and modifications also has finite products, given `pointwise' by the products of A. -x- is an object of [A^2,A]. <br />We can use (a x b) x c and a x (b x c) as names for objects in [A^3,A] and applying the Lemma to [A^3,A](a,b,c) deduce the existence of the associator equivalence, pseudonatural in a,b, and c. The associator gives rise to two arrows (abbreviating somewhat) ((ab)c)d ===> a(b(cd)) in [A^4,A](a,b,c,d) and between these we have a unique invertible modification, the pi of [GPS]. <br />The coherence of pi is chiefly the Stasheff non-abelian 4-cocycle condition (again see [GPS]) and for this we need only apply the Lemma to [A^5,A](a,b,c,d,e) to see that the two modifications in question are equal. Of course the other data and equations are handled with similar appeals to the Lemma. <br /><br />Max was not content to stop here. In his last few days he had been learning the rather subtle definition of _symmetric_ monoidal bicategory and constructed the requisite braiding equivalence and syllepsis isomorphism for a bicategory with finite products. Everything follows from the universal property but Max has shown us _how_ so that we can calculate with these things. His insights show us the way to deal with coherence issues arising from birepresentability generally and weak n-representability when the need arises. <br /><br />Max's personal copy of [GPS] was autographed by Ross with the words ``To Max Kelly, a master of coherence''. Yes, he was. 
<br /><br />Aurelio Carboni, Robert Walters, and Richard Wood <br />------------------------------------------<br />From: George Janelidze <br />To: categories@mta.ca<br />Date: Monday 29 January 2007 12.15.06 <br /><br />Gregory Maxwell Kelly was one of the great mathematicians of our time, so perfect in his research and vision of mathematics, and in every aspect of academic life. <br /><br />And he was so exceptionally kind to everyone. It is hard to imagine that Max is not with us anymore, and it is a great pain for his family and for all of us, his friends and colleagues... <br /><br />George Janelidze <br />---------------------------------------------<br />From: Andre Joyal <br />To: categories@mta.ca <br />Date: Wednesday 31 January 2007 21.17.42 <br /><br />Letter to Max <br /><br />Dear Max, I feel deeply sad that you have left. Now that you are gone, I realise how much you mean to me.<br /><br />I regret not telling you that.<br /><br />I wish to repair that by writing you this letter. If I send it to Imogen and to your friends, it will reach you in some way. <br /><br />Your work has been a constant source of inspiration for me. It combines beauty, rigor and depth. It is fundamental, I use it every day. It will last forever. You were a great mathematician. <br /><br />I also want to thank you for inviting me to Australia. I did some of my best work there. You were a great host. I made many friends. <br /><br />I wish we could meet again. <br /><br />I will talk with you in my dreams. 
<br /><br />Yours, Andre<br />---------------------------------------Robert Waltershttp://www.blogger.com/profile/11168659017114792898noreply@blogger.com0tag:blogger.com,1999:blog-7446594.post-41748202623876722072013-12-11T00:07:00.000-08:002013-12-11T00:09:12.476-08:00Tributes to Aurelio CarboniI have placed in the Como Category Archive some <a href="http://comocategoryarchive.com/Archive/conferences/aurelio-carboni/tributes/">articles</a> about Aurelio Carboni, which are based on lectures given at the <a href="http://math.unipa.it/metere/Aurelio2013/index.html">conference in memory of Aurelio</a>, June 2013, in Milan. They will appear in a volume of the journal Protagora, edited by Fabio Minazzi of the University of Insubria, Varese.Robert Waltershttp://www.blogger.com/profile/11168659017114792898noreply@blogger.com0tag:blogger.com,1999:blog-7446594.post-39438128472961952642013-11-28T06:24:00.000-08:002013-11-28T06:31:42.284-08:00Last days IIAs I mentioned in an earlier post, my last days as professor at the University of Insubria, Como, were complicated by a sudden change of office. I don't remember whether I mentioned that I also had to complete a safety course, obligatory for all teachers and staff of the university, whose final date was my last working day. Although it was not relevant for me, since I have no duties regarding safety in the university after my retirement, I was afraid there might be legal consequences if I did not complete the course as specified.<br /><br />I thought these complications were an aspect of Italian life, but I see now that similar problems occur in England. In Peter Cameron's blog he <a href="http://cameroncounts.wordpress.com/2013/11/25/moving-out/">recounts</a> being removed from his office, and losing his status as a lecturer before his course ended. 
He had my problem of what to do with all the books and papers.<br /><br />(When I decided to leave the University of Sydney in 1998 I was careful not to tell anyone about it till the last moment exactly for fear of being regarded prematurely redundant.)Robert Waltershttp://www.blogger.com/profile/11168659017114792898noreply@blogger.com1tag:blogger.com,1999:blog-7446594.post-61381423676129286632013-11-26T04:23:00.001-08:002013-11-26T12:51:15.094-08:00A walk near LeccoA few days ago I was going to write a post about how depressing I find November with its lack of light. Today however we went for a walk along the Adda in delightful weather. Here are some photographs made with my phone:<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-jXeAjrOW7_k/UpSRobg4weI/AAAAAAAABgE/_hcHAwiq63Y/s1600/2013-11-26+08.48.12.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="240" src="http://4.bp.blogspot.com/-jXeAjrOW7_k/UpSRobg4weI/AAAAAAAABgE/_hcHAwiq63Y/s320/2013-11-26+08.48.12.jpg" width="320" /></a></div><div class="separator" style="clear: both; text-align: left;">The next photograph has Grigna in the background.</div><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-eQWJmNLAmSo/UpSR4jHLcBI/AAAAAAAABgM/W7QxPOs-IyY/s1600/2013-11-26+08.53.10.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="240" src="http://1.bp.blogspot.com/-eQWJmNLAmSo/UpSR4jHLcBI/AAAAAAAABgM/W7QxPOs-IyY/s320/2013-11-26+08.53.10.jpg" width="320" /></a></div><div class="separator" style="clear: both; text-align: left;">In the centre background of the next is Resegone:</div><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-9wNiznBzGNQ/UpSSYR1NPRI/AAAAAAAABgU/h7uLi73UxNc/s1600/2013-11-26+09.20.53.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" 
height="240" src="http://3.bp.blogspot.com/-9wNiznBzGNQ/UpSSYR1NPRI/AAAAAAAABgU/h7uLi73UxNc/s320/2013-11-26+09.20.53.jpg" width="320" /></a></div><div class="separator" style="clear: both; text-align: left;">Another photograph of Grigna with Lago di Olginate in the foreground:</div><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-RT7Dxm9K1X4/UpSSqr0dMtI/AAAAAAAABgc/NySlCh-uUOU/s1600/2013-11-26+09.40.27.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="320" src="http://2.bp.blogspot.com/-RT7Dxm9K1X4/UpSSqr0dMtI/AAAAAAAABgc/NySlCh-uUOU/s320/2013-11-26+09.40.27.jpg" width="240" /></a></div><div class="separator" style="clear: both; text-align: left;">A last photograph of Grigna and Resegone (with swan):</div><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-DhabtYRSNrY/UpUJh_rRUxI/AAAAAAAABgs/zSgafVZjSUw/s1600/2013-11-26+09.20.43.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="240" src="http://4.bp.blogspot.com/-DhabtYRSNrY/UpUJh_rRUxI/AAAAAAAABgs/zSgafVZjSUw/s320/2013-11-26+09.20.43.jpg" width="320" /></a></div><div class="separator" style="clear: both; text-align: left;"><br /></div><br /><br /><br /><br />Robert Waltershttp://www.blogger.com/profile/11168659017114792898noreply@blogger.com0tag:blogger.com,1999:blog-7446594.post-77702926508166824122013-10-17T08:21:00.001-07:002013-11-02T11:16:59.416-07:00Yoneda Structures<div dir="ltr">Going through my papers I found notes of a lecture I gave on Yoneda Structures in December 1971, at a meeting (13-17 December) at the University of New South Wales while Peter Freyd was visiting. 
A pdf file may be had by following <a href="https://dl.dropboxusercontent.com/u/92056191/Archive/temporary_new_material/yoneda.pdf">this link</a>.</div>Robert Waltershttp://www.blogger.com/profile/11168659017114792898noreply@blogger.com0tag:blogger.com,1999:blog-7446594.post-35010262747492601432013-10-25T20:56:00.001-07:002013-11-02T01:51:05.961-07:00Sydney Mathematics Department <div class="separator" style="clear: both; text-align: left;">Below is a list found amongst my papers of staff and students of the University of Sydney Mathematics Department from 1935 till 1968, the period in which T.G. Room was professor. You may notice that in one year Ross Street, Brian Day and Vaughan Pratt were students together. A little earlier Graeme Segal was a student. Max Kelly was on the staff, and had been a student earlier. I miss out in two ways: before 1935 my great uncle Horatio Scott Carslaw was Professor for many years, and in 1970 I joined the staff. I think the list was compiled by Sam Conlon.</div><a name='more'></a><br /><div style="text-align: center;"><br /></div><div align="center" dir="ltr"><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-dA4mUdreSww/Umu_Cu6G5-I/AAAAAAAABeo/Dz7Ja9anZdg/s1600/2013-10-25+11.27.03.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="200" src="http://3.bp.blogspot.com/-dA4mUdreSww/Umu_Cu6G5-I/AAAAAAAABeo/Dz7Ja9anZdg/s200/2013-10-25+11.27.03.png" width="121" /></a></div><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-56-voiwEmz0/UmyunHc_jZI/AAAAAAAABe4/vFzxBUveG7U/s1600/2013-10-25+21.29.29.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="200" src="http://4.bp.blogspot.com/-56-voiwEmz0/UmyunHc_jZI/AAAAAAAABe4/vFzxBUveG7U/s200/2013-10-25+21.29.29.png" width="133" /></a></div><a 
href="http://1.bp.blogspot.com/-zw19g3DZuUw/Ums8dXc8RoI/AAAAAAAABdY/w7pa8MboWqA/s1600/2013-10-25+12.29.15.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="200" src="http://1.bp.blogspot.com/-zw19g3DZuUw/Ums8dXc8RoI/AAAAAAAABdY/w7pa8MboWqA/s200/2013-10-25+12.29.15.png" width="130" /></a><a href="http://4.bp.blogspot.com/--clAdN5d2V0/Ums8pO3HEeI/AAAAAAAABdg/0IJaKnyGnVA/s1600/2013-10-25+21.29.29.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><br /></a></div><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-65CJ7DTj_qg/Ums80BBz3WI/AAAAAAAABdo/Qj2E99p87MA/s1600/2013-10-25+21.31.51.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="200" src="http://3.bp.blogspot.com/-65CJ7DTj_qg/Ums80BBz3WI/AAAAAAAABdo/Qj2E99p87MA/s200/2013-10-25+21.31.51.png" width="119" /></a></div><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-Ue975Apa7uU/Ums8-s2Mz3I/AAAAAAAABdw/pl9MXcTW9jY/s1600/2013-10-25+21.33.12.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="200" src="http://2.bp.blogspot.com/-Ue975Apa7uU/Ums8-s2Mz3I/AAAAAAAABdw/pl9MXcTW9jY/s200/2013-10-25+21.33.12.png" width="133" /></a></div><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-wRDj-NYQsbo/Ums9K3hSdDI/AAAAAAAABd4/_M-y9VfCj4o/s1600/2013-10-25+21.34.26.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="200" src="http://1.bp.blogspot.com/-wRDj-NYQsbo/Ums9K3hSdDI/AAAAAAAABd4/_M-y9VfCj4o/s200/2013-10-25+21.34.26.png" width="125" /></a></div><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-zhZD9zlFvKA/Ums9WHeI9nI/AAAAAAAABeA/ffJVOMu9HdM/s1600/2013-10-25+21.35.29.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="200" 
src="http://2.bp.blogspot.com/-zhZD9zlFvKA/Ums9WHeI9nI/AAAAAAAABeA/ffJVOMu9HdM/s200/2013-10-25+21.35.29.png" width="135" /></a></div><div class="separator" style="clear: both; text-align: center;"><br /></div><div class="separator" style="clear: both; text-align: center;"><br /></div><div class="separator" style="clear: both; text-align: center;"><br /></div><div class="separator" style="clear: both; text-align: center;"><br /></div><div class="separator" style="clear: both; text-align: center;"><br /></div><div dir="ltr"><br /></div>Robert Waltershttp://www.blogger.com/profile/11168659017114792898noreply@blogger.com0tag:blogger.com,1999:blog-7446594.post-23181442265933941652013-10-31T10:03:00.001-07:002013-10-31T10:15:05.800-07:00Castiglione Olona<div dir="ltr">We go frequently to Varese, but last Saturday I visited for the first time the beautiful mediaeval village of Castiglione Olona, a few kilometres from the centre of Varese. Here is a detail of frescoes by Masolino in the baptistery of the Collegiata church (taken before I realized that taking photographs was forbidden).</div><div class="separator" style="clear: both; text-align: center;"> </div><div class="separator" style="clear: both; text-align: center;"><a href="http://lh4.ggpht.com/-rHfj2oP2sTc/UnKNQkRPEfI/AAAAAAAABfQ/H90nV6LMjL8/s1600/20131026_102822.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="150" src="http://lh4.ggpht.com/-rHfj2oP2sTc/UnKNQkRPEfI/AAAAAAAABfQ/H90nV6LMjL8/s200/20131026_102822.jpg" width="200" /></a></div><div class="separator" style="clear: both; text-align: center;"><br /></div><div class="separator" style="clear: both; text-align: left;">On a different matter, I used to accommodate guests at a beautiful ex-monastery in Calolzio-Corte until I discovered that some were allergic to bells.</div><div class="separator" style="clear: both; text-align: left;"> </div>Robert 
Waltershttp://www.blogger.com/profile/11168659017114792898noreply@blogger.com0