tag:blogger.com,1999:blog-74465942014-09-21T10:50:32.405-07:00Bob WaltersRobert Waltershttp://www.blogger.com/profile/11168659017114792898noreply@blogger.comBlogger231125tag:blogger.com,1999:blog-7446594.post-33412818474964542762014-09-19T08:25:00.003-07:002014-09-21T01:19:58.629-07:00Sleeping beauty problem III<a href="http://rfcwalters.blogspot.it/2014/08/the-sleeping-beauty-problem-how-some.html">Previous post in this series</a><br /><br />I am writing a third post about the sleeping beauty problem partly because I have seen <a href="http://www.stubbornmule.net/2014/08/sleeping-beauty/">Sean Carmody's post</a> (26th August) in which he describes the problem and expresses his uncertainty about it, and promises an opinion in a future post. We are waiting, Sean, for your ideas on the matter!<br /><br /><br /><a name='more'></a><br /><br />$\bf Note\; about\; comments\; on\; this\; blog $<br />I have never taken very seriously responses to posts at this blog because (apart from spam) there have been very few. After the sleeping beauty posts I did receive quite a few comments as a result of <a href="http://motls.blogspot.com/2014/08/sleeping-beauty-thirders-rudimentary.html">Lubos Motl's reposting </a> on his blog. I allowed several, mostly anonymous, but then I decided I prefer to have some idea of the identity of persons with whom I am discussing. I made it a bit harder to respond to posts. I may reverse that policy since the blog has returned to normal, however in general I will not accept anonymous responses. On popular blogs one sees hundreds of anonymous comments of very variable quality. It is not what I have in mind for this blog. Of course I am interested in serious responses which may persuade me to change my opinions.<br /><br />$\bf Remarks\; about\; the \; sleeping\; beauty\; problem$<br />I want to make a couple of remarks about the sleeping beauty problem. 
Most people who take the thirder position are not talking about probability but rather about the average amount of time spent in a state, or similar concepts such as how much one could win by betting on a state. This is certainly not the probability of arriving in a state at a specified time or times, and in my opinion is not the question under consideration.<br />The other curious feature of the discussion of this clearly stated problem is that everyone wants to consider modifications of the problem in trying to persuade their opponents. I am going to commit a similar sin in this post.<br /><br />But first a comment on a strange aspect of the thirder position. Classically, conditional probability is about restricting the possible (atomic) events or actions, and results in some events having zero probability, while the remaining events have $\bf increased$ probability. Instead, in the thirder "solution" an action which has probability $1/2$ is changed to have a $\bf lesser$ probability (of $1/3$) as a result of a condition. Strange!<br /><br />$\bf A\; true\; story:\; The \; Tirano-Torino \; Problem$<br />$NOT\; the\; sleeping\; beauty\; problem$<br />(names changed to protect the innocent)<br />One day I was running to catch a train at Milano Centrale, and I had no time to check the platform. I found two trains which were just beginning to depart, one for Torino and one for Tirano, but I did not know which was which. I had to choose, which I did with no information. It was a dark and stormy night, and the stations were badly signed, so after one, two and three stations I was still unable to identify which station we were passing. What was my opinion as to whether I had caught the right train? I had not a clue. I estimated $50\%$.<br /><br />Now let's take the position that I could bet at each station which train I was on.
There are more stations from Milan to Tirano than to Torino, so I should bet that I am on the Tirano train.<br /><br />Or alternatively, as suggested by one commenter, I should imagine 1000 computers simulating the train voyages repeatedly, starting at different times, and obviously I would see that more of the trains were on the Tirano line.<br /><br />Great arguments, but I had no idea which train I was on until, of course, I recognized a station.Robert Waltershttp://www.blogger.com/profile/11168659017114792898noreply@blogger.com0tag:blogger.com,1999:blog-7446594.post-76750608779192553472014-08-05T02:10:00.006-07:002014-09-19T08:38:50.022-07:00The sleeping beauty problem: how some philosophers and physicists calculate probability<a href="http://rfcwalters.blogspot.it/2014/08/the-sleeping-beauty-problem-how-some.html">Next post in this series</a><br /><br />Recently there have been surprising discussions and disputes amongst philosophers and physicists about an elementary probability problem called the Sleeping Beauty problem. The following remarks are, as usual in this blog, the result of discussions with Nicoletta Sabadini.<br /><br />The problem goes as follows: A beauty is told that the following procedure will be carried out. On Sunday a fair coin will be tossed without her knowing the result. She will go to sleep. Then on Monday one of two possibilities will occur.<br /><br />In the case that the toss of the coin resulted in tails, she will be wakened and asked her opinion of the probability that the result of the coin toss was heads. She will then have her memory of what happened on Monday erased and will be put to sleep. On Tuesday (again in the case of tails, without a further toss of the coin) she will be wakened and asked her estimate of the probability that the result of the coin toss was heads.<br /><br />In the case of heads, on Monday she will be asked her estimate of the probability that the result of the coin toss was heads.
In that case she will not be asked again.<br /><br />It seems intuitively clear that, when this procedure is carried out, in all three responses she has learnt nothing about the result of the coin toss, and that she should answer in each case $1/2$.<br /><br />Strangely, a considerable number of philosophers and physicists make an elementary error in the calculation of probabilities and believe that she should answer $1/3$.<br /><br /><b>Update:</b> I see also that a recipient of the COPSS Presidents' Award (the Nobel Prize of Statistics, according to Wikipedia), Jeffrey S. Rosenthal, believes the $1/3$ answer.<br /><br /><a name='more'></a>The philosophers who come to this conclusion include professors at Princeton (Adam Elga) and Oxford (Nick Bostrom). The physicists include a Senior Research Associate at Caltech, Sean Carroll, who manages to involve quantum mechanics and the multiverse. Another seems to be a Dirac and Milner prizewinner, Joe Polchinski. (Not all physicists agree - for example, the maverick (or mainstream?)
physicist <a href="http://motls.blogspot.it/2014/07/the-sleeping-beauty-problem.html">Lubos Motl </a>is scathing about the answer $1/3$.)<br /><br /><b>Update</b>: Lubos Motl has reblogged <a href="http://motls.blogspot.it/2014/08/sleeping-beauty-thirders-rudimentary.html">this post on his blog</a>, with additional interesting comments.<br /><br />Let's see how they manage to come to the $1/3$ conclusion and what their elementary error is.<br /><br />We can describe the procedure by a simple Markov chain, pictured below, in which the initial state (Sunday) is $0$, the state $1$ is Monday in the case of heads, the state $2$ is Tuesday and subsequent dates after heads, the state $3$ is Monday in the case of tails, the state $4$ is Tuesday and subsequent dates in the case of tails.<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-okVWhawInl4/U-DIckLs5JI/AAAAAAAABzY/qBMeTS1gCB8/s1600/markov.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-okVWhawInl4/U-DIckLs5JI/AAAAAAAABzY/qBMeTS1gCB8/s1600/markov.jpg" height="234" width="320" /></a></div>Now the argument of the philosophers/physicists mentioned above is that when the beauty is awakened she knows she is in one of states $1$, $3$ or $4$, but she doesn't know which.<br />They claim that being in each of these three states has the same probability, and hence this probability must be $1/3$ (the sum must be $1$). But in state $1$ she knows the coin was heads, and in $3$ and $4$ the coin was tails. Hence in a third of cases the coin is heads, and so she must answer $1/3$.<br /><br /><i><b>The egregious error!</b></i><br />In a Markov process one cannot in general talk about <i>the probability of being in a state.
</i>One must talk about the probability of passing from state $x$ to state $y$ in $n$ steps.<br />(If the process is ergodic then in a limiting distribution there is some sense of talking about the probability of being in a state, but that is not the case here.)<br /><br />The three probabilities discussed above are (i) the probability $p_1$ of going from $0$ to $1$ in <i>one step</i>, (ii) the probability $p_3$ of going from $0$ to $3$ in <i>one step</i>, and (iii) the probability $p_4$ of going from $0$ to $4$ in<i> two steps</i>. Each of these three values is clearly $1/2$.<br /><br />What, however, should the beauty calculate for the probability $p_{3,4}$ of being <i>either</i> in $3$ in one step or $4$ in two steps? The philosophers/physicists are claiming that $p_{3,4}=2p_1$. I claim $p_{3,4}=p_1$. To see this we need to think about how, in general, to calculate the probability of going from state $x$ to $y$ in $m$ or $n$ steps in a Markov chain.<br /><br />Take $x$ to be a fixed initial state, which we won't mention again. The sum over all $y$ of $p_{one}(y)$ (the probability of reaching $y$ in one step) is $1$. The sum over all $y$ of $p_{two}(y)$ (the probability of reaching $y$ in two steps) is $1$.
Hence taking $p_{one,two}(y)=p_{one}(y)+p_{two}(y)$ as the physicists/philosophers do would yield $\sum_y p_{one,two}(y)=2$, whereas the total probability should be $1$.<br />A reasonable definition for $p_{one,two}$ would be $p_{one,two}(y)=\frac{1}{2}( p_{one}(y)+p_{two}(y))$.<br /><br />This definition yields, in the sleeping beauty problem, that $p_{3,4}=\frac{1}{2}(p_3+p_4)= \frac{1}{2}=p_1$.<br /><br /><br /><br /><br />Robert Waltershttp://www.blogger.com/profile/11168659017114792898noreply@blogger.com8tag:blogger.com,1999:blog-7446594.post-40775908205269622232014-08-09T04:54:00.001-07:002014-09-19T08:37:18.830-07:00The sleeping beauty problem: how some statisticians calculate probability<a href="http://rfcwalters.blogspot.it/2014/09/sleeping-beauty-problem-iii.html">Next post in this series</a>; <a href="http://rfcwalters.blogspot.it/2014/08/the-sleeping-beauty-problem-how.html">previous post</a><br /><br /><br />This problem was drawn to my attention by calculations of physicists and philosophers (see my last post), but I see that some eminent statisticians (for example, Jeffrey S. Rosenthal) are also convinced that the answer is $1/3$.<br /><br />The real problem, I think, is that statisticians immediately start calculating with conditional probability, Bayes' formula etc. without modelling the problem mathematically.
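A quick simulation (my own sketch, not from any of the papers under discussion) makes the source of the two numbers explicit: per experiment the coin is heads half the time, while only about a third of awakenings follow heads, so the thirder's $1/3$ counts awakenings (time spent in a state), not coin outcomes.

```python
import random

random.seed(0)
runs = 100_000
heads_runs = 0          # runs in which the Sunday coin landed heads
heads_awakenings = 0    # awakenings that follow a heads toss
total_awakenings = 0

for _ in range(runs):
    heads = random.random() < 0.5   # fair coin tossed on Sunday
    awakenings = 1 if heads else 2  # heads: Monday only; tails: Monday and Tuesday
    heads_runs += heads
    heads_awakenings += awakenings if heads else 0
    total_awakenings += awakenings

# Per run (per coin toss) the frequency of heads is about 1/2 ...
print(heads_runs / runs)
# ... but the fraction of awakenings that follow heads is about 1/3.
print(heads_awakenings / total_awakenings)
```

The two printed numbers answer two different questions; the dispute is over which question the beauty is being asked.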
It is also important to generalize a problem to avoid being confused by particular details.<br />In any case, I want to clarify what I said in the last post by considering a simple example.<br /><br /><a name='more'></a>The essential point of the sleeping beauty problem (see my last post) is that the beauty is asked to estimate a probability without knowing how many steps of the process have occurred.<br /><br /><b>A simpler problem</b><br />Consider the following Markov chain, whose states $0,1,\dots,99$ represent the birthdays of my life:<br />$0\xrightarrow{1}1\xrightarrow{1}2 \xrightarrow{1}\cdots \xrightarrow{1}99\xrightarrow{1}100 $,<br />with also a loop of probability $1$ at state $100$, when I am in my tomb.<br /><br />Now it is clear that the probability of my being $1$ after $1$ year is $1$, the probability of being $2$ after $2$ years is $1$, the probability of being $n$ after $n$ years is $1$ ($n\leq 99$).<br /><br />Moreover, if you ask me what is the probability of being $5$, say, during my life (that is, after $0$ or $1$ or $\cdots$ or $99$ years) I might be tempted to answer $1$, since it is certain I will be $5$ during my life.<br /><br />However, the sum of the probabilities that I am $n$ ($n\leq 99$) during my life must add up to total probability $1$, and hence I must answer that the probability of being $5$ in my life is $1/100$.
That is, rarely am I five.<br /><br />Notice that if I ask what is the probability of being alive (that is, in one of the states $n$, $0\leq n \leq 99$) at some point in a period of $100,000$ years, I must answer $1/1000$; that is, mostly you will find me dead.<br /><br /><b>The sleeping beauty problem</b><br />Applying the same reasoning to the sleeping beauty Markov chain in the last post, sleeping beauty must calculate the probabilities in the case of tails not knowing whether one or two days have passed, and so she must assign $1/4$ to the probability of Monday after tails, and $1/4$ to the probability of Tuesday after tails, so that the probability of Monday or Tuesday after tails is $1/2$.<br /><br />There is a complication I didn't mention explicitly in the last post. In the case of heads, the beauty knows that only one step has occurred. Hence she should calculate the probability of the heads case using the probabilities in one step, not one or two steps. Hence she should estimate the probability of Monday as $1/2$ after heads.<br /><br />Notice that she has calculated probabilities in two different probability spaces. The fact that the sum of these two probabilities is $1$ means that knowing that she is being interviewed gives her no new information, or, said another way, conditioning on the fact that an interview is in process does not require normalization of the probabilities.<br /><br /><b>A statistician's argument against</b> $1/2$<br />Let us see how one statistician claims to get a contradiction from this estimate. I will describe an argument from a paper. (The article begins sloppily because in stating the problem it does not specify that the beauty knows the protocol of the procedure to be carried out.)<br /><br />The sample space is not described, and only partial inference is made about probabilities, but it is sufficient to convince the author that $P(Heads)< \frac{1}{2}$.
We will see that the error in the argument is that the beauty must make calculations in two different probability spaces (one step, and one or two steps).<br /><br />The argument begins as follows: $P(Heads|Monday)=1/2$, $P(Heads|Tuesday)=0$, $P(Tuesday)> 0$, $P(Monday)=1-P(Tuesday)$ and hence $P(Monday)< 1$. All these probabilities are conditioned on the fact that an interview is taking place.<br /><br />Aside from the fact that he is really talking about two probability spaces (one step, and one or two steps), I could agree with these probabilities. However, the real error comes in his next step, when he seeks to apply the law of total probability:<br />$P(Heads)=P(Heads|Monday)P(Monday)+P(Heads|Tuesday)P(Tuesday)$ <br />$P(Heads)=(1/2)P(Monday)+0\times P(Tuesday)< (1/2)$<br />contradicting $P(Heads)=1/2$.<br /><br />The error comes from applying the law of total probability to two different probability spaces where it is not valid. Part of the equation refers to probabilities for one step, the other part to one or two steps.<br /><br /><br /><br /><br /><br /><br /><br /><br />Robert Waltershttp://www.blogger.com/profile/11168659017114792898noreply@blogger.com5tag:blogger.com,1999:blog-7446594.post-78931135912137543942014-09-19T06:35:00.002-07:002014-09-19T07:24:20.949-07:00Opera d'abbaco del reverendo padre don Smiraldo BorghettiHere are some pages from a book on arithmetic, published in 1594, which I bought some years ago.<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-8l_Yqzv-6iQ/VBwaH9XTK8I/AAAAAAAAB2A/2w91t5bzAGs/s1600/20140919_124834.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-8l_Yqzv-6iQ/VBwaH9XTK8I/AAAAAAAAB2A/2w91t5bzAGs/s1600/20140919_124834.jpg" height="240" width="320" /></a></div><div class="separator" style="clear: both; text-align: center;"><a
href="http://3.bp.blogspot.com/-1u2URgkVgMk/VBwvXF-IlOI/AAAAAAAAB3U/SOSJacGShHY/s1600/2014-09-19%2B15.22.00.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-1u2URgkVgMk/VBwvXF-IlOI/AAAAAAAAB3U/SOSJacGShHY/s1600/2014-09-19%2B15.22.00.jpg" height="240" width="320" /></a></div><div class="separator" style="clear: both; text-align: left;">It is interesting to read that without numbers the world would be without order, and a horrible chaos.</div><div class="separator" style="clear: both; text-align: left;">Strangely, the current popular view (which I do not share) is that the world is chaos, even to the extent that the laws of physics are the accidental effect of which universe of the multiverse we happen to inhabit.<a href="http://1.bp.blogspot.com/-cKvgefGFmW4/VBwaFPmKd4I/AAAAAAAAB14/yPmcPJcBF_o/s1600/20140919_124849.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><br /></a></div><br />Robert Waltershttp://www.blogger.com/profile/11168659017114792898noreply@blogger.com0tag:blogger.com,1999:blog-7446594.post-81479698627079948722014-09-14T10:15:00.001-07:002014-09-18T00:45:46.827-07:00Lex total categories and Grothendieck toposes IV<a href="http://rfcwalters.blogspot.it/2014/09/lex-total-categories-and-grothendieck.html">Previous post in this series</a>; next post.<br /><br />The aim of this post is to complete the proof that well-powered lex total categories are elementary toposes, by proving that they have subobject classifiers.<br /><br /><a name='more'></a><br /><br />But first let's do something simpler, namely prove that a total category ${\bf A}$ has a terminal object.
(We will use the same strategy later for constructing the subobject classifier of a well-powered lex total category.)<br /><br />$\bf Terminal\; object$<br /><br />It is clear that ${\bf A}$ has a terminal object iff the constant functor ${\bf A}^{op}\to Sets: A\mapsto 1$ is representable; its representing object is a terminal object. Further, in general, the colimit of a representable functor considered as a discrete fibration is the representing object. But the constant functor considered as a discrete fibration over ${\bf A}$ is the identity functor $1_{\bf A}:{\bf A}\to {\bf A}$, which has small fibres, and so we can form $colim(1_{\bf A})$, which we write informally as $colim_{A\in {\bf A}}A$ and also $Q$ for short. We claim that this object is the terminal object of ${\bf A}$.<br /><br />Denote the universal cocone from $1_{\bf A}$ to $Q$ as $\lambda_A:A\to Q \; (A\in {\bf A})$. The $\lambda_A$ provide the existence of arrows from each object to $Q$; what remains to show is the uniqueness. Notice that for each $f:A\to Q$ we have the cocone property $\lambda_{Q}f=\lambda_A$, so in particular taking $f=\lambda_A:A\to Q$ we have $\lambda_Q\lambda_A=\lambda_A=1_Q\lambda_A$. By the uniqueness property of the universal cocone this implies that $\lambda_Q=1_Q$. Then returning to the equation $\lambda_{Q}f=\lambda_A$ we see that $f=\lambda_A$ for any $f:A\to Q$, the uniqueness. <br /><br />It is clear, by the way, that a similar argument shows that totally cocomplete categories have all small limits.<br /><br />$\bf Subobject\; classifier$<br /><br />We now want to do the same thing, representing a different functor $sub:{\bf A}^{op}\to Sets:A\mapsto sub(A)$, where $sub(A)$ is the set of subobjects of $A$, which is small by assumption. We will denote the subobject of $A$ containing the mono $U\to A$ as $[U\to A]$ or just $[U]$. The effect of $sub$ on an arrow $f:A\to B$ is $[U\to B]\mapsto [f^{-1}(U)]$.
To obtain the subobject classifier we take the colimit of the corresponding fibration, which we denote as $colim_{[U\to A]}A$ or more briefly as $\Omega$. Denote the universal cocone of the colimit as $\lambda_{[U\to A]}:A\to \Omega$ $(A\in {\bf A})$. Let $I$ be the terminal object of ${\bf A}$, and denote $\lambda_{[1_I:I\to I]}:I\to \Omega$ as $true:I\to \Omega$.<br /><br />Now to show that $\Omega$ is the subobject classifier we will prove (i) that given any mono $U\to A$, the pullback of $true:I\to \Omega$ along $\lambda_{[U\to A]}:A\to \Omega$ is $U\to A$, and (ii) if $f:A\to \Omega$ satisfies $f^{-1}(true)=U\to A$ then $f=\lambda_{[U\to A]}:A\to \Omega$. Now (i) requires the left exactness of colimit, and we shall delay proving (i). <br /><br />First, assuming (i), we will prove (ii). <br />Consider $f:A\to \Omega$ such that $f^{-1}(true)=U\to A$. Then $f$ is a morphism from $(A,[U])$ to $(\Omega,[true])$ in the domain of the fibration $sub$ and hence we have the equation (*) $\lambda_{true}f=\lambda_{[U]}$. Assuming (i) we may put $f=\lambda_{[U]}$ in this equation to yield $\lambda_{true}\lambda_{[U]}=\lambda_{[U]}=1_{\Omega}\lambda_{[U]}$. The uniqueness property of the universal cocone implies that $\lambda_{true}=1_{\Omega}$. Substituting this in (*) we obtain $f=\lambda_{[U]}$. <br /><br />Now to prove (i).<br />I am getting tired so I will just state the necessary facts and leave you to check them.<br />Consider a mono $U\to A$. Consider the following arrows in $Sets^{{\bf A}^{op}}$: (a) $Hom(-,A)\to sub:1_A\mapsto [U]$, (b) $Hom(-,I)\to sub:1_I\mapsto [1_I]$. It is easy to check that the pullback of these two arrows is $Hom(-,U)$ with the obvious projections. It is a little less easy to check, but straightforward, that $colim$ applied to this pullback diagram gives exactly the diagram we wish to show is a pullback.
Since $colim$ preserves finite limits we have the desired result.<br /><br />$\bf Note$<br />I just realized that the first statement that well-powered lex total categories are elementary toposes was made in my lecture, entitled Total Cocompleteness, to <a href="http://comocategoryarchive.com/Archive/temporary_new_material/Isle-of-Thorns-1976.pdf">the Isle of Thorns conference, July 25-31, 1976.</a>Robert Waltershttp://www.blogger.com/profile/11168659017114792898noreply@blogger.com0tag:blogger.com,1999:blog-7446594.post-46478973420309278312014-09-17T00:11:00.002-07:002014-09-17T09:14:55.975-07:00Giuseppe PeanoOn Saturday I bought, for 2 euros at the mercatino (street market) of Lavello, a little book of numerical tables by Giuseppe Peano. It has an interesting preface by Peano.<div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-tNPdl7E0nU4/VBkyoQQFkUI/AAAAAAAAB1Q/l5BJl2UqkoM/s1600/20140917_083229.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-tNPdl7E0nU4/VBkyoQQFkUI/AAAAAAAAB1Q/l5BJl2UqkoM/s1600/20140917_083229.jpg" height="320" width="240" /></a></div><br /><a name='more'></a>I suppose Peano is best known for axiomatizing the natural numbers.
He introduced the set theory symbols $\epsilon$, $\subset$, $\cap$, $\cup$.<br />The cover of the little book:<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-IbhmAMsILcw/VBkz77P9QdI/AAAAAAAAB1Y/1n2QMQ_Z2ng/s1600/20140917_083205.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-IbhmAMsILcw/VBkz77P9QdI/AAAAAAAAB1Y/1n2QMQ_Z2ng/s1600/20140917_083205.jpg" height="320" width="240" /></a></div><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-air6OKwOoEY/VBk0LShkz3I/AAAAAAAAB1g/9Ry5IIcZASA/s1600/20140917_083219.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-air6OKwOoEY/VBk0LShkz3I/AAAAAAAAB1g/9Ry5IIcZASA/s1600/20140917_083219.jpg" height="320" width="240" /></a></div><br /><br /><br />Robert Waltershttp://www.blogger.com/profile/11168659017114792898noreply@blogger.com0tag:blogger.com,1999:blog-7446594.post-80902079474617602322014-09-16T13:01:00.005-07:002014-09-17T06:05:12.099-07:00Ricordo di Aurelio Carboni, Matematico e Filosofo<div dir="ltr">I have just received a volume of "Il Protagora" in which there is a section devoted to Aurelio's memory.</div><div dir="ltr">There are four articles, all in<br />Il Protagora, Volume XL, July-December 2013, sesta serie, n. 20</div><div dir="ltr">The articles are</div><div dir="ltr">Fabio Minazzi, Un ricordo di Aurelio Carboni, pp. 489-494</div><div dir="ltr">F. William Lawvere, Farewell to Aurelio, pp. 495-498</div><div dir="ltr">R.F.C. Walters, Working with Aurelio - Tangled Lives, pp. 499-504</div><div dir="ltr">George Janelidze, An open letter to Aurelio Carboni and all Mathematicians who remember him, pp.
505-513</div><div dir="ltr"><br /></div><div dir="ltr"><br /></div><div dir="ltr">Prepublication versions of three of the papers are at the <a href="http://comocategoryarchive.com/Archive/conferences/aurelio-carboni/tributes/">Como Category Archive</a>.</div>Robert Waltershttp://www.blogger.com/profile/11168659017114792898noreply@blogger.com0tag:blogger.com,1999:blog-7446594.post-43740881635189645792014-09-03T17:28:00.002-07:002014-09-14T10:16:26.995-07:00Lex total categories and Grothendieck toposes III<br /><a href="http://rfcwalters.blogspot.it/2014/08/lex-total-categories-and-grothendieck.html">Previous post in this series</a>;<a href="http://rfcwalters.blogspot.it/2014/09/lex-total-categories-and-grothendieck_14.html"> next post in this series</a>.<br /><br />Today I want to give an application of the adjoint functor theorem for totally complete categories, namely to show that lex total categories are cartesian closed. The proof should be a generalization of the proof that locales are cartesian closed, so we should look at that before attempting the case of lex total categories. We will be repeating some of the discussion of the first post in this series.<br /><br /><a name='more'></a><br /><br />(i) By the adjoint functor theorem for complete posets it is sufficient to show that if $A$ is a locale and $x$ is an element then the functor $x\wedge -:A\to A$ preserves sups of downsets. That is, we need to show that if $U$ is a downset then $x\wedge sup(U)=sup(\{x\wedge u; u\in U\})$.<br /><br />(ii) We denote the downset generated by a subset $V$ as $\langle V\rangle $, and it is clear that $sup(V)=sup(\langle V\rangle )$.<br /><br />(iii) Notice that $\{x\wedge u;u\in U\}$ is not necessarily a downset.
The crucial thing to prove is that $\langle \{x\wedge u; u \in U\}\rangle =\langle x\rangle \cap U$, since then, by the left exactness of $sup: PA\to A$, it follows that<br /><br /> $sup(\{x\wedge u; u\in U\})=sup(\langle \{x\wedge u; u \in U\}\rangle )$<br />$=sup(\langle x\rangle \cap U)=sup(\langle x\rangle )\wedge sup(U)$<br />$=x\wedge sup(U)$<br /> as required.<br /><br />(iv) To prove that $\langle \{x\wedge u; u \in U\}\rangle =\langle x\rangle \cap U$, notice that an element of the left-hand side is an element $y\in A$ such that $y\leq x\wedge u$ for some $u \in U$; that is, such that $y\leq x$ and $y\leq u$ for some $u\in U$; that is, such that $y$ is in the right-hand side. The converse is equally evident.<br /><br />$\bf Generalizing\;\; to\;\; categories:$<br /><br /> By the adjoint functor theorem for total categories it is sufficient to show that if $A$ is a lex total category and $x$ is an object then the functor $x\times -:A\to A$ preserves colimits of discrete fibrations $U:B\to A$ with small fibres. That is, we need to show that $x\times colim(U)=colim(x\times U(-))$ for such $U$.<br /><br />Notice that $x\times U(-):B\to A$ is not necessarily a discrete fibration, so we need the analogue of the downset generated by a subset, which for categories is achieved by the factorization into a cofinal functor followed by a discrete fibration.<br /><br /><br />$\bf Interlude\; about\; a\; factorization:$<br />The factorization of a functor $V:B\to A$ as $B\xrightarrow{V_1}\langle V\rangle \xrightarrow{V_2}A$ goes as follows. First I will describe the category $\langle V\rangle $. $\langle V\rangle $ is the category whose objects are equivalence classes of triples $(a,\alpha :a\to V(b),b)$, where $a\in A$, $\alpha$ is an arrow of $A$, and $b\in B$. Notice that this generalizes the requirement in a poset that $a\leq V(b)$; in a category, however, $a$ may have arrows to many objects $V(b)$, and different arrows to the same one. Hence the equivalence relation, to be explained now.
We will denote the equivalence class of $(a,\alpha :a\to V(b),b)$ by $[(a,\alpha :a\to V(b),b)]$. Then the equivalence relation is specified by the requirement that $[(a,\alpha :a\to V(b),b)]=[(a,\alpha V(\beta) :a\to V(b'),b')]$ for any $\beta : b\to b'\in B$. Finally an arrow in $\langle V\rangle $ is an arrow $\alpha':a'\to a: [(a',\alpha'\alpha ,b)]\to [(a,\alpha ,b)]$. (I omit the verification that this definition is unambiguous.)<br /><br />Now to describe the two functors of the factorization. $V_2:\langle V\rangle \to A$ is defined by $V_2([(a,\alpha ,b)])=a$ and $V_2(\alpha':a'\to a: [(a',\alpha'\alpha,b)]\to [(a,\alpha,b)])=\alpha'$. Again I omit the check that it is unambiguous. Given that, it is clear that $V_2$ is a discrete fibration. Finally the definition of $V_1$: $V_1(b)=[(V(b), 1_{V(b)}, b)]$ and<br />$V_1(\beta':b\to b')=V(\beta'):[(V(b),1_{V(b)},b)]$<br />$=[(V(b), V(\beta'), b')]\to [(V(b'),1_{V(b')},b')]$.<br /><br />O.K. I have left out a lot of checking. I should also explain why $V_1$ is cofinal. What we need here is a little less, namely that $colim(V)=colim(V_2:\langle V\rangle \to A)$. The check is straightforward.<br /><br />$\bf Return\; to \; the\; proof:$ We have now reached point (iii) of the proof for posets, which we would like to generalize to categories.<br />The crucial point to prove is that the discrete fibration $\langle x\times U(-)\rangle \to A$ is $Elements(Hom(-,x))\times_A U(-)\to A$, because then the left exactness of $colim : PA\to A$ allows the following calculation:<br />$colim(x\times U(-))=colim(\langle x\times U(-)\rangle )$<br />$= colim(Elements(Hom(-,x))\times_A U(-) )$<br />$=colim(Elements(Hom(-,x)))\times colim(U(-))$<br />$=x\times colim(U)$ as required.<br /><br /><br />So now to the crucial point, which should be a jazzed-up categorical version of (iv) for locales above.
We have two discrete fibrations which we want to show isomorphic (and at the same time verify that they have small fibres - I almost forgot to say that! In fact, $Elements(Hom(-,x))\times_A U(-)\to A$ clearly has small fibres).<br />I will only describe the two functors (and on objects only) which I claim form an isomorphism, and leave the rest to you.<br /><br />The functor $\langle x\times U(-)\rangle \to Elements(Hom(-,x))\times_A U(-)$.<br /><br />An object over $a$ of the left-hand side is an equivalence class of the form $[(a, \alpha:a\to x\times U(b),b)]$. From a representative of the equivalence class we can extract two arrows $\alpha_1:a\to x$ and $\alpha_2:a\to U(b)$.<br />But since $U$ is a discrete fibration there is a unique object $b'\in B$ over $a$ and an arrow $\beta:b'\to b$ such that $U(\beta)=\alpha_2$. Clearly<br />$[(a, \alpha:a\to x\times U(b),b)]=$<br />$[(a,a\xrightarrow{\Delta}a\times a\xrightarrow{\alpha_1\times 1_a}x\times U(b')\xrightarrow{1_x\times U(\beta)}x\times U(b),b)]=$<br />$[(a,a\xrightarrow{\Delta}a\times a\xrightarrow{\alpha_1\times 1_a}x\times U(b'),b')]$.<br /><br />So from each equivalence class over $a$ we can extract an arrow from $a$ to $x$ and an object $b'\in B$ over $a$. But the objects over $a$ of the right-hand side are exactly such pairs.<br /><br /> The functor $ Elements(Hom(-,x))\times_A U(-)\to \langle x\times U(-)\rangle$.<br /><br /> An object of the left-hand side over $a$ consists of an arrow $\alpha:a\to x$ and an object $b$ of $B$ such that $U(b)=a$. The functor takes this object to the equivalence class $[(a,a\xrightarrow{\Delta}a\times a\xrightarrow{\alpha\times 1_a}x\times U(b),b)]$.<br /><br /> I have run out of steam! I hope you see the analogy with the locale result.<br /><br />$\bf References$<br /><br />Ross Street and I announced the stronger result that well-powered lex total categories are elementary toposes in the paper<br />R.H. Street and R.F.C. Walters.
Yoneda structures on 2-categories. Journal of Algebra, 50:350-379, 1978.<br />Looking around in subsequent literature, however, I cannot see where a proof has been published.<br /><br />A less elementary description of the comprehensive factorization was given in the original paper:<br />R.H. Street and R.F.C. Walters. Comprehensive factorization of a functor. Bull. Amer. Math. Soc., 79:936-941, 1973.<br /><br />Other papers on total and lex total categories:<br />Walter Tholen, Note on total categories, Bull. Austral. Math. Soc., 21 (1980), 169-173.<br /><br />Ross Street, Notions of topos, Bull. Austral. Math. Soc., 23 (1981), 199-208.<br /><br />R.J. Wood, Some remarks on total categories, J. Algebra, 75:2 (1982), 538-545.<br /><br />Ross Street, The family approach to total cocompleteness and toposes, Trans. Amer. Math. Soc., 284 (1984), 355-369.<br /><br />Max Kelly, A survey of totality for enriched and ordinary categories, Cahiers de Topologie et Géométrie Différentielle Catégoriques, 27 no. 2 (1986), p. 109-132.<br /><br />Brian Day, Further criteria for totality, Cahiers de Topologie et Géométrie Différentielle Catégoriques, 28 no. 1 (1987), p. 
77-78Robert Waltershttp://www.blogger.com/profile/11168659017114792898noreply@blogger.com0tag:blogger.com,1999:blog-7446594.post-51895913794785393122014-09-04T01:31:00.001-07:002014-09-10T01:21:20.502-07:00Span(Graph) III: Circuits with feedback<a href="http://rfcwalters.blogspot.it/2014/08/spangraph-ii-circuits-with-feedback.html">Previous post in this series;</a> next post in the series.<br /><br />The two main operations in $\bf Circ$ are composition and parallel as for straight-line circuits, and there are a variety of constants.<br /><br />Today we describe the operation of composition of circuits with feedback in $\bf Circ$.<br /><br /><a name='more'></a><br /><br />$\bf Composition$ If $G$ is a component with left interface $B^m$ and right interface $B^n$ (an arrow in $\bf Circ$ from $B^m$ to $B^n$) and $H$ is a component from $B^n$ to $B^p$ then the composition denoted $G\bullet H$ has left interface $B^m$ and right $B^p$. The graph of $G\bullet H$ has states being all pairs $(x,y)$ of states, one $x$ of $G$ and one $y$ of $H$. A transition in $G\bullet H$ is a pair of transitions $(\alpha,\beta)$, one of $G$ and one of $H$, which however must satisfy the following requirement: the $n$-tuple of right labels of $\alpha$ must equal the $n$-tuple of left labels of $\beta$.<br /><br />This is just the obvious requirement that for both components to change state the signals on the connected ports must be equal.<br /><br />It remains to say what are the left and right labellings of transitions of the composite. 
The left labelling of a transition $(\alpha,\beta)$ is the left labelling of $\alpha$, and the right labelling of $(\alpha,\beta)$ is the right labelling of $\beta$.<br /><br />The picture of the composite will be the two components joined side-by-side with the right ports of $G$ joined to the left ports of $H$.<br /><br />To make sure that the definition is clear we will calculate the composite of two delay elements.<br />(I notice that I did not say that the delay component is an arrow in $\bf Circ$ from $B$ to $B$ - I hope that was obvious.)<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-VbMsI5FqE9o/VAtG4PGzIfI/AAAAAAAAB0w/blCLPB3ui4A/s1600/delay2.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-VbMsI5FqE9o/VAtG4PGzIfI/AAAAAAAAB0w/blCLPB3ui4A/s1600/delay2.jpg" height="92" width="320" /></a></div><br /><br />The states of this composite are the four pairs of states, namely $(0,0)$, $(0,1)$, $(1,0)$, $(1,1)$ which I will write more briefly as $00$, $01$, $10$, $11$ to reduce clutter.<br /><br />There are $16$ candidates for transitions, namely all pairs of transitions but not all satisfy the condition that the labels on the common ports are equal.<br /><br />One transition of the composite comes from the pair $(0\xrightarrow{1/0}1, 1\xrightarrow{0/1}0)$ since these two transitions have the common signal $0$ on the middle wire.<br />This transition of the composite is then $01\xrightarrow{1/1}10$; that is it goes from state $01$ to state $10$, with the labelling being the labels of the free ports. 
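<br /><br />The matching condition is mechanical, so it can be checked by machine. Here is a minimal Python sketch (the encoding of a component as a set of transition 4-tuples is mine, not part of the post) which composes two delay elements in exactly this way:

```python
# A component is modelled as a set of transitions
# (state, left_label, right_label, next_state).
# Delay element: the right label is the current state,
# and the next state is the left label.
delay = {(s, a, s, a) for s in (0, 1) for a in (0, 1)}

def compose(g, h):
    """G . H: pair the transitions of G and H whose labels on the
    connected ports (right of G, left of H) are equal."""
    return {((x, y), a, c, (x2, y2))
            for (x, a, b, x2) in g
            for (y, b2, c, y2) in h
            if b == b2}

two_delay = compose(delay, delay)
print(len(two_delay))                       # 8 of the 16 candidate pairs survive
print(((0, 1), 1, 1, (1, 0)) in two_delay)  # the transition 01 --1/1--> 10 above
```

The count and the surviving transitions agree with the enumeration below.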
<br /><br />A pair which does not give rise to a transition of the composite is $(0\xrightarrow{1/0}1, 1\xrightarrow{1/1}1)$ since the labels on the middle wire do not agree.<br /><br />I will list all the transitions which do occur in the composite:<br />$00\xrightarrow{0/0}00$, $00\xrightarrow{1/0}10$, $01\xrightarrow{0/1}00$, $01\xrightarrow{1/1}10$, $10\xrightarrow{0/0}01$, $10\xrightarrow{1/0}11$, $11\xrightarrow{0/1}01$, $11\xrightarrow{1/1}11$.<br />Here is the resulting component:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-yTD3-DYAxXs/VAtHAdTquLI/AAAAAAAAB04/HXirMHiMM6M/s1600/delay2b.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-yTD3-DYAxXs/VAtHAdTquLI/AAAAAAAAB04/HXirMHiMM6M/s1600/delay2b.jpg" height="172" width="320" /></a></div><br /><br />You may think of a state as the state of a 2-bit register. On a transition the first bit moves to the second place, the second bit exits as a right signal, and the left signal is inserted in the first bit of the register. The result is a step-2 delay. It is easy to generalize this to the composite of $n$ delay elements: states are the states of an $n$-bit register; on a transition all the bits are moved to the right by one place, the last bit exits as a right signal, and the left signal is recorded in the first place. All this occurs in one transition of the composed $n$ delays.<br /><br />$\bf A\; very\; important\; remark!$ The delay element is a very particular component in which the left signal drives the component into a new state, and the state determines the right signal. We will soon be dealing with components which are quite different, in which the left hand ports are nothing like input ports and in which the signals on the ports do not determine the transitions.<br /><br /><i>You should not think (in general) of components having input and output ports. 
Instead you should think that the component has its own transitions, and that the left and right signals are a reflection of the transitions of the component on the interfaces.</i>Robert Waltershttp://www.blogger.com/profile/11168659017114792898noreply@blogger.com0tag:blogger.com,1999:blog-7446594.post-71445593195788768702014-08-15T10:27:00.002-07:002014-09-04T01:36:21.303-07:00Span(Graph) II: Circuits with feedback<a href="http://rfcwalters.blogspot.it/2014/08/spangraph-i.html">Previous Span(Graph) post</a>; <a href="http://rfcwalters.blogspot.it/2014/09/spangraph-iii-circuits-with-feedback.html">next Span(Graph) post.</a><br /><br />As an introduction to ${\bf Span(Graph)}$ I will describe how to extend the category of straight-line circuits to allow circuits with state and feedback. But before doing that I would like to point out a couple of things about straight-line circuits.<br /><br /><a name='more'></a><br /><br />Firstly, I introduced the operation of twisting two wires $twist:B^2\to B^2$. It is easy to see that using $twist$, $1_B$, product and composition we can write an expression which gives the twisting of two wires against two wires, or $m$ wires against $n$ wires, or in fact any permutation of wires. Let's just look at twisting two across two.<br /><br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-oEsX68CUVCQ/U-5CIaoFVAI/AAAAAAAABz0/zOfNL8IcW3E/s1600/twist2.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-oEsX68CUVCQ/U-5CIaoFVAI/AAAAAAAABz0/zOfNL8IcW3E/s1600/twist2.jpg" height="106" width="200" /></a></div><br /><br />We can read off the expression:<br /> $(1_B\times twist\times 1_B)(twist\times twist)(1_B\times twist \times 1_B)$.<br /><br />Another operation on wires was the duplication of a wire $\Delta:B\to B^2$. But we can also duplicate two, or $n$ wires. 
Let's see two:<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-BUysAHDUJng/U-5Ce0hfHuI/AAAAAAAABz8/OnAcRRvvAgk/s1600/diag2.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-BUysAHDUJng/U-5Ce0hfHuI/AAAAAAAABz8/OnAcRRvvAgk/s1600/diag2.jpg" /></a></div><br /><br />We will call this operation $\Delta_2$, defined by $\Delta_2=(\Delta\times \Delta)(1_B\times twist\times 1_B)$. Its input/output behaviour is $(x,y)\mapsto (x,y,x,y)$.<br /><br />A final example before passing to circuits with feedback, namely an equation true for straight-line circuits but false for the circuits with feedback we are about to define. I will draw a picture of the equation: given any arrow $f:B\to B$ (but a similar equation is also true for any $f:B^m\to B^n$):<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-C0T-_GuD2VI/U-5CqhUmvCI/AAAAAAAAB0E/MgIh1iUrsHw/s1600/eqn.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-C0T-_GuD2VI/U-5CqhUmvCI/AAAAAAAAB0E/MgIh1iUrsHw/s1600/eqn.jpg" height="118" width="320" /></a></div><br />That is, $f\Delta=\Delta(f\times f)$. 
If you look at the input/output behaviour of these circuits you can see they are the same, namely $x\mapsto (f(x),f(x))$; however the second uses more components.<br /><br />${\bf Boolean\; circuits\; with\; state\; and\; feedback}$.<br />The aim is to define a category, denoted ${\bf Circ}$, again with objects being $B^n$ but with more general arrows, with operations similar to the ones we have defined for straight-line circuits, but also with an operation of feedback.<br /><br />Today I will just describe the arrows, which I shall often call components.<br /><br />An arrow of ${\bf Circ}$ from $B^m$ to $B^n$ consists of a finite graph $G$ whose edges are labelled by pairs of elements $(x,y)$ where $x=(x_1,x_2,\cdots ,x_m)\in B^m$, and $y=(y_1,y_2,\cdots ,y_n)\in B^n$. The vertices of the graph $G$ are called the ${\it states}$ of the component and the edges are called the ${\it transitions}$ of the component. The labels tell you what happens on the left and right hand wires of the component when a transition occurs. A ${\it behaviour}$ of the component is a path in the graph $G$ which also yields a sequence of left and right labels.<br /><br />Let's look at a simple example. Notice that we separate the left and right labels by a slash.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-huP6BYtYrJI/U-5C12hRo_I/AAAAAAAAB0M/wiENDsIdiuc/s1600/delay.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-huP6BYtYrJI/U-5C12hRo_I/AAAAAAAAB0M/wiENDsIdiuc/s1600/delay.jpg" height="171" width="320" /></a></div><br /><br />This is a two-state component with four transitions, but what should we call it? Let's see what kind of behaviour it has. Remember a behaviour is a path in the graph. 
The sequence of left-labels on a path may be thought of as a signal occurring on the left as the transitions occur; similarly the sequence of right-labels is the signal on the right.<br /><br />So here is a behaviour starting in state $0$:<br /><br />$0\xrightarrow{0/0}0\xrightarrow{1/0}1\xrightarrow{0/1}0\xrightarrow{0/0}0\xrightarrow{1/0}1\xrightarrow{0/1}0\xrightarrow{0/0}0$.<br /><br />The signal on the left for this behaviour is $0,1,0,0,1,0,0$ whereas the signal on the right is $0,0,1,0,0,1,0$. You may notice that the signal on the right is the left signal ${\it delayed}$ by one step. In fact this component is a ${\it unit\; delay\; element}$.<br /><br />Why does it act that way? The reason is that the next state is determined by the left signal (the signal is remembered as state) and the next right signal is determined by the current state (which was the last left signal).<br /><br />You may wonder why I speak of left signal, right signal instead of input and output signals. It is crucial for what we do next that left and right do not necessarily correspond to input or output respectively.<br /><div><br /></div>Robert Waltershttp://www.blogger.com/profile/11168659017114792898noreply@blogger.com4tag:blogger.com,1999:blog-7446594.post-66072044942664175062014-08-13T05:33:00.000-07:002014-09-03T17:49:35.255-07:00Lex total categories and Grothendieck toposes II<a href="http://rfcwalters.blogspot.it/2014/09/lex-total-categories-and-grothendieck.html">Next post in this series.</a><br /><br />I said in the<a href="http://rfcwalters.blogspot.it/2014/07/lex-total-categories-and-grothendieck.html"> first post</a> in this series that totally complete categories have a strong adjoint functor theorem.<br /><br />Here is <i>the adjoint functor theorem</i>: if $A$ and $B$ are locally small categories, $A$ is totally cocomplete and $F: A\to B$ is a functor which preserves colimits of discrete fibrations with small fibres then $F$ has a right adjoint. 
<br /><br /><a name='more'></a><br /><br />Before indicating a proof of this result let us look at the<i> adjoint functor theorem for posets</i>. <br />If $F:A\to B $ is an order preserving function, $A$ is a complete poset and $F$ preserves sups then $F$ has a right adjoint $U$ with $U(b)=sup_{F(a)\leq b}(a)$. <br /><i>proof</i> Certainly $U$ so defined has the property that if $F(a)\leq b$ then $a\leq U(b)$. But also since $F$ preserves $sup$ then $FU(b)=F(sup_{F(a)\leq b}(a))=sup_{F(a)\leq b}F(a)\leq b$ and hence if $a\leq U(b)$ then $F(a)\leq FU(b)\leq b$. Hence $F(a)\leq b$ if and only if $a\leq U(b)$. Notice that we used preservation of sups only for the sups in the definition of $U$.<br /><br />There is a straightforward extension to categories of this result, namely a functor $F:A\to B$ has a right adjoint if and only if for each $b\in B$ the colimit $colimit_{Fa\to b}a$ exists and is preserved by $F$. I have been imprecise about the diagram of the colimit - we will see a definition below. Let's use this result to prove the adjoint functor theorem.<br /><br />What we need to show is that if $A$, $B$ are locally small and $A$ is totally complete then $colimit_{Fa\to b}a$ is the colimit of a discrete fibration with small fibres. <br /><br />Notice that the fact that $A$ is totally complete suggests a possible adjoint to $F$, namely <br />$B\xrightarrow{yoneda_B}PB\xrightarrow{PF}PA\xrightarrow{colimit}A$ <br />where by $PA$ I mean $Sets^{A^{op}}$.<br />But $PF(yoneda_B(b))=B(F(-),b)$ so the suggested formula for $U(b)$ is the colimit of the discrete fibration $Elements(B(F(-),b))\to A$ corresponding to the functor $B(F(-),b):A^{op}\to Sets$.<br /><br />What is this discrete fibration? Over $a\in A$ the fibre is $B(Fa,b)$, that is, it consists of arrows $Fa\to b$. Over $\alpha:a_1\to a_2$ there are arrows $(f:Fa_1\to b)\to (g:Fa_2\to b)$ exactly when $g F(\alpha) =f$. But this is exactly the diagram I intended for $colim_{Fa\to b}(a)=U(b)$. 
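<br /><br />The poset form of the theorem is easy to test by brute force. Here is a tiny Python sketch (the powerset example and all names are mine) of the formula $U(b)=sup_{F(a)\leq b}(a)$:

```python
from itertools import combinations

def subsets(xs):
    """All subsets of xs, as frozensets."""
    xs = list(xs)
    return [frozenset(c) for r in range(len(xs) + 1) for c in combinations(xs, r)]

A = subsets({1, 2})      # a complete poset under inclusion
B = subsets({1, 2, 3})
F = lambda s: s          # inclusion of A into B; it preserves unions (sups)

def U(b):
    """U(b) = sup of all a with F(a) <= b; sup in a powerset is union."""
    out = frozenset()
    for a in A:
        if F(a) <= b:    # <= is subset inclusion for frozensets
            out |= a
    return out

# The adjunction: F(a) <= b if and only if a <= U(b)
print(all((F(a) <= b) == (a <= U(b)) for a in A for b in B))  # True
```
<br /><br />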
Certainly this colimit exists and we have the theorem.<br /><br />I am busy today, so I will give examples of the use of this theorem in producing constructions in lex total categories next time.Robert Waltershttp://www.blogger.com/profile/11168659017114792898noreply@blogger.com0tag:blogger.com,1999:blog-7446594.post-75420892542516678912010-05-17T23:54:00.000-07:002014-08-17T06:46:47.706-07:00HypothesesI notice that John Baez has realized a danger of the new scientific order of blog posts, blog comments, labs, mailing lists, expository-style papers, preprints in arxiv, and traditional scientific journal articles.<br /><br />From a comment on the May 15, 2010 post of <a href="http://golem.ph.utexas.edu/category/">n-category cafe</a> (my italics)<br />"This Week’s Finds and my seminar notes are packed with hunches of varying caliber. Sometimes you need to read between the lines a bit to see them — for example, if I say something ‘should’ be true, it means I believe it’s true but haven’t proved it. And sometimes, <span style="font-style: italic;">I’ve said something is true even though I haven’t proved it</span>. By now I realize this is a bad habit… thanks to the following story.<br />Once I went to a talk where somebody said that for any ring R there’s a one-object tricategory Alg(R) consisting of R-algebras, bimodules and bimodule morphisms. I said “Really? Do you know if anyone has ever written that up?” And the speaker said “Sure! 
It’s in This Week’s Finds!” Which galled me, because <span style="font-style: italic;">while I knew it was true</span>, I’d never seen a proof written up — and I realized then that by claiming it was true in This Week’s Finds, I’d reduced the chances of ever seeing a proof."<br />Another remark:<br />"The last page of my slides was supposed to summarize only the ideas that will surprise and thrill the audience, so I get a standing ovation."<br /><br />I find the current state of n-category theory a confusing mixture of conjecture, unfinished definitions, unfinished proofs, but lots of exposition.<br /><br />I am old-fashioned and believe that to maintain integrity we must insist that the science exists in the scientific journal articles. Credit should be assigned on the basis of published scientific articles. <br />I don't know what it means to have expository articles of a vaguely developed field.<br /><br />Perhaps we will end with Bogdanovs.Robert Waltershttp://www.blogger.com/profile/11168659017114792898noreply@blogger.com0tag:blogger.com,1999:blog-7446594.post-15018100474322153302012-10-01T08:01:00.003-07:002014-08-17T06:43:52.280-07:00Verba volant, scripta manentWhere do the internet, the blogs, digital photographs, the cloud and internet archives fit in?Robert Waltershttp://www.blogger.com/profile/11168659017114792898noreply@blogger.com0tag:blogger.com,1999:blog-7446594.post-81419325976500468982013-03-04T22:24:00.001-08:002014-08-17T06:43:10.299-07:0070I am seventy years old. 
Time to write memoirs.<br /><br />5th March 2013Robert Waltershttp://www.blogger.com/profile/11168659017114792898noreply@blogger.com0tag:blogger.com,1999:blog-7446594.post-71951879070905476892013-09-26T04:25:00.002-07:002014-08-17T06:41:53.501-07:00Ross Street's OrientalsI recently had a look at one of Jacob Lurie's expositions (in 111 pages) on<a href="http://www.math.harvard.edu/~lurie/papers/cobordism.pdf"> the classification of topological field theories</a> - very briefly and with little comprehension. I find it difficult to read mathematics in which not even formal definitions are given: Lurie himself says "In many instances, we have not attempted to give precise definitions, let alone careful proofs".<br /><br />I have always been interested in lower category theory (up to symmetric monoidal bicategories) but only once have ventured beyond. Always we had the idea that natural transformations were like homotopies, and that one could consider homotopies between homotopies and so on, and so one would get higher categories. Thinking about axiomatizing higher categories, in particular higher associative laws, led Ross Street to consider the free n-category on an n-simplex. Mike Johnson and I also worked on this problem.<br /><br />Following this<a href="https://dl.dropboxusercontent.com/u/92056191/Archive/temporary_new_material/Orientals.html"> link</a> there is an early picture produced by Ross of an oriental.<br /><br /><br />This seems to me still an interesting problem but I don't see where it has gone in Joyal and Lurie's work.<br /><br />In any case infinity-category theory seems to be a distinct subject from category theory. Category theory should resist the attempted takeover.Robert Waltershttp://www.blogger.com/profile/11168659017114792898noreply@blogger.com1tag:blogger.com,1999:blog-7446594.post-88958677360042116692010-04-01T10:02:00.000-07:002014-08-15T10:46:11.030-07:00Heat on the netCommunication can become very heated on the net. 
I remember my first experience of it years ago when a round of emails between me, Max Kelly and Ross Street spiralled, in the course of a couple of hours, into aggression. The only way out of it was to stop writing, and start again later. (We successfully maintained a seminar, with others, which began in 1970, was still active when I left Sydney in 1998, and remains today one of the principal seminars in category theory.)<br /><br />I was reminded of this while thinking of the 20-year success of the Categories List. One needs to be conscious of the new mode of communication in order to produce a stable and productive group communication. I receive other lists which contain nothing but announcements of conferences and of jobs, whereas the categories list has managed to sustain mathematical discussion.<br /><br />I found an interesting page about <a href="http://www.aaup.org/AAUP/pubsres/academe/2007/JF/Feat/Lear.htm">moderated academic lists</a>. I quote one paragraph:<br />"One lesson I learned along the way is that the ecology of an active online community is surprisingly fragile. It can go wildly out of whack, and even self-destruct, in a very short time. I’ve seen any number of lists decline and die over the years. Some merely waste away through attrition and neglect until no one posts to them anymore; others become mere notice boards for calls for papers and the like, without any real interaction among subscribers. A few flame out spectacularly, bursting with so many nasty, off-topic messages that all the “lurkers” unsubscribe, leaving the disputants to fight among themselves until even they grow weary of it. Charting a middle course means achieving a high “signal-to-noise” ratio, in which the list’s content is useful enough to make it worth people’s time to stay involved." <br /><br />Another thing I learnt looking around is the meaning of "internet troll". 
From <a href="http://en.wikipedia.org/wiki/Internet_forum">wikipedia</a> a quote:<br />"Forum trolls are users that repeatedly and deliberately breach the netiquette of an established online community, posting inflammatory, extraneous, or off-topic messages to bait or excite users into responding or to test the forum rules and policies, and with that the patience of the forum staff. Their provocative behavior may potentially start flame wars or other disturbances. Responding to a troll's provocations is commonly known as 'feeding the troll' and is generally discouraged, as it can encourage their disruptive behavior."Robert Waltershttp://www.blogger.com/profile/11168659017114792898noreply@blogger.com0tag:blogger.com,1999:blog-7446594.post-4451816655428631272010-05-23T23:43:00.000-07:002014-08-15T10:45:35.202-07:00Probability 2 - Boy or Girl paradoxI continue to be amazed by the fact that apparently simple probability problems are often so difficult to resolve. It is particularly surprising in view of the fact that so much science seems to depend on probability these days: the particle physicists' results are small probabilistic anomalies in incredibly expensive experiments, quantum physicists say that nature is inherently probabilistic, the medicine we take is determined by 'random' trials, evolution apparently occurs by 'random' mutation, etc, etc. (I don't necessarily concur with all this, by the way.) <br /><br />I have been thinking more about this fact since the <a href="http://cameroncounts.wordpress.com/2010/05/02/probability/">blog post of Peter Cameron</a>, which derived from another <a href="http://go2.wordpress.com/?id=725X1342&site=cameroncounts.wordpress.com&url=http%3A%2F%2Falexbellos.com%2F%3Fcat%3D1%26paged%3D2&sref=http%3A%2F%2Fcameroncounts.wordpress.com%2F2010%2F05%2F02%2Fprobability%2F">post by Alex Bellos</a>. 
The particular problem they consider is this: if someone tells you that they have two children at least one of whom is a boy born on Tuesday, what is the probability that the second child is a boy?<br /><br />Actually, the simpler version of this problem in which there is no mention of the day of birth (possibly introduced by <a href="http://en.wikipedia.org/wiki/Martin_Gardner">Martin Gardner</a> who died 22nd May), has been <a href="http://en.wikipedia.org/wiki/Boy_or_Girl_paradox">discussed and disputed</a> in short and in long, in circles high and low, under the name "The Boy or Girl Paradox".<br /><br />In short, my view is that the answer to this question is 1/2 (under the simplifying assumptions clearly intended; that is, that the birth rates of boys and girls are equal, etc).<br />However a closely related problem which I will explain would yield the answer 1/3, the answer preferred by mathematicians.<br /><br />I think this simple problem benefits from the distinction between states and actions, the states being the quality of being a girl or a boy, and the actions being the declarations made. In the real world problem you need to consider the probability of the action as well as the probability of being in a state.<br /><br />Let's now consider the problem in detail: the simplifying assumptions mentioned above mean that given a family with two children the following four cases are equally likely: BoyBoy, BoyGirl, GirlBoy, GirlGirl. In the case BoyBoy the likelihood of a declaration that there is a boy is 1, in the case of GirlGirl the likelihood of a declaration that there is a girl is 1, but in the other two cases there is a half chance that the declaration is 'boy', and a half chance that the declaration is 'girl'. Now let's look at the total weight of a declaration 'boy': it is 1+1/2+1/2. The weight which corresponds to the case BoyBoy is 1. Hence the proportion of 'boy' answers which correspond to the state two boys is <br />1/(1+1/2+1/2)=1/2. 
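<br /><br />The weighted count can be reproduced exactly with a few lines of Python (the encoding of the declaration protocol is mine):

```python
from fractions import Fraction

quarter, half = Fraction(1, 4), Fraction(1, 2)
families = [('B', 'B'), ('B', 'G'), ('G', 'B'), ('G', 'G')]  # equally likely

# Weight of declaring 'boy', by the number of boys in the family:
declare_boy = {2: Fraction(1), 1: half, 0: Fraction(0)}

total_weight = sum(quarter * declare_boy[f.count('B')] for f in families)
two_boys_weight = quarter * declare_boy[2]
print(two_boys_weight / total_weight)  # 1/2
```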
<br /><br />Now a slightly different problem: suppose I interview people with two children, and ask if they have a boy in the family; if they answer yes, what is the probability that the second child is a boy?<br />Again the simplifying assumptions mentioned above mean that given a family with two children the following four cases are equally likely: BoyBoy, BoyGirl, GirlBoy, GirlGirl. In the case BoyBoy the likelihood that they respond 'yes' is 1, in the case of GirlGirl the likelihood that they respond 'yes' is 0; in both the other two cases the likelihood of 'yes' is 1, not 1/2. Now let's look at the total weight of a response 'yes': it is 1+1+1. The weight which corresponds to the case BoyBoy is 1. Hence the proportion of 'yes' answers which correspond to the state two boys is <br />1/(1+1+1)=1/3. <br /><br />If there is also mention of the day of birth there are at least three interpretations:<br />1) a person declares that they have two children at least one of whom is a boy born on Tuesday;<br />2) when asked if they have a boy, the response is 'yes, and born on Tuesday';<br />3) when asked if they have a boy born on Tuesday, the response is 'yes'.<br /><br />The probability of the second child being a boy in the first case is 1/2, in the second 1/3 and in the third 13/27.<br /><br />In this post I have tried (exaggeratedly) to argue in as simple language as possible, because a simple problem should only require the appropriate distinctions, not big machinery. However I have earlier proposed (with Sabadini and de Francesco Albasini) a <a href="http://arxiv4.library.cornell.edu/abs/1005.0949">mathematical context</a>, in which many of the perplexities are seen to arise from considering (normalized) probabilities, rather than weights of actions. The precise mathematical point is that normalization does not behave well with respect to sequential operations (which include abstraction). 
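<br /><br />The second and third interpretations can be checked by exact enumeration over equally likely (sex, weekday) pairs; here is a quick Python sketch (the encoding is mine). The first interpretation needs the declaration weights discussed above, and again gives 1/2.

```python
from fractions import Fraction
from itertools import product

children = list(product('BG', range(7)))      # (sex, weekday): 14 equally likely kinds
families = list(product(children, repeat=2))  # 196 equally likely families

def p_two_boys(cond):
    """P(two boys | cond), by counting equally likely families."""
    sub = [f for f in families if cond(f)]
    boys = [f for f in sub if f[0][0] == 'B' and f[1][0] == 'B']
    return Fraction(len(boys), len(sub))

TUESDAY = 1  # any fixed day gives the same answers

# 2) asked whether there is a boy, the response is 'yes':
print(p_two_boys(lambda f: 'B' in (f[0][0], f[1][0])))  # 1/3
# 3) asked whether there is a boy born on Tuesday, the response is 'yes':
print(p_two_boys(lambda f: ('B', TUESDAY) in f))        # 13/27
```
<br /><br />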
Another simple perplexing example where abstraction does not work well with normalization is Simpson's paradox, the mathematical origin of which is the fact that a/b+c/d is not always equal to (a+c)/(b+d).<br /><br />An extreme example of Simpson's paradox is the following:<br />Consider treatments A,B and diseases X,Y.<br />Treatment A with disease X on one person cures the person (100% cure rate - best possible).<br />Treatment B with disease X on 99 people cures 98 (worse than 100%).<br />So A has a better success rate than B with disease X.<br /><br />Treatment A with disease Y on 99 people cures 1 person (this is better than 0% cure rate).<br />Treatment B with disease Y on one person fails to cure (0% cure rate - worst possible).<br />So A is better than B with disease Y.<br /><br /><br />In both diseases X and Y the treatment A is better than B.<br /><br />However treatment A saves 2 people in 100, B saves 98 in a hundred.<br />B is worse than A???Robert Waltershttp://www.blogger.com/profile/11168659017114792898noreply@blogger.com6tag:blogger.com,1999:blog-7446594.post-61935112413282262002010-05-06T08:49:00.000-07:002014-08-15T10:44:56.602-07:00ProbabilityI made the suggestion recently that the Kolmogorov presentation of probability was too abstract, and that this was the cause of elementary confusions like those common in the Monty Hall problem. <br /><br />I must admit to being quite unsure of the foundations of probability theory. The fact that simply stated problems are easy to make errors about hints to me that the gap between theory and practice is too wide. I make some tentative remarks below, which are made more formal in a <a href="http://arxiv.org/abs/1005.0949">small paper</a> we have just written.<br /><br />1. First, I think probability should be about explicitly described systems. 
(Peter Cameron in his <a href="http://cameroncounts.wordpress.com/2010/05/02/probability/">blog post</a> describes two possible systems ("protocols") which might be behind a particular problem, yielding different results.) This would imply that one needs a notion of system. In this context probabilities are weightings of actions in a particular state of a system. Such a weighting represents information about the cause of actions. It is a kind of primitive dynamical information.<br /><br />2. The Kolmogorov view is that independence is defined in terms of the probabilities of events. This is the attitude that we are observing behaviours, rather than considering systems. For systems there is a clear operation of composition in parallel. Actions may appear to be independent behaviourally without being actually parallel.<br /><br />3. The fact that the sum of all probabilities must be 1 is further evidence of a behavioural view. A system may have big reasons for making a choice, or small reasons, in both cases, say, with a half and half probability. This doesn't affect observed behaviours. However in composing systems large reasons in one system may overwhelm small reasons in another.<br /><br />To make this more concrete: suppose I am in a situation where I am deciding to eat an apple or a pear (with equal interest), and suddenly I perceive that a car is driving straight towards me, and that with equal choice I must decide whether to run to the left or right to avoid an accident. This is a state in a composition of two systems. 
It is clear that the weight behind choosing an apple or a pear is overwhelmed by the choice between jumping left or right.<br /><br />At a system level weight is more fundamental than probability, which arises as a normalization of weight.<br /><br /><span style="font-weight: bold;">Update:</span> I see now that Peter Cameron has come to a conclusion (which I quote in full, but with my italics; it refers to a particular calculation which you can see at his <a href="http://cameroncounts.wordpress.com/2010/05/02/probability/">blog</a>):<br />"Mathematically the conditional probability that someone has two boys, given that one of their two children is a boy born on Tuesday, is 13/27. If we started defining the probability of some event in terms of the algorithm that led to the statement being made, all our textbooks would need to be rewritten from the ground up. <span style="font-style: italic;">I think the best way to proceed is to say</span>, we do know how to calculate probability (and how to interpret it), but this requires careful thought, and sometimes our intuition lets us down."<br /><br /><span style="font-weight: bold;">Update:</span> I made a too brief comment on Peter Cameron's blog, namely that<br />"I agree with John Faben [another commenter] that the calculation of probability requires information about the algorithm involved."<br /><br />It is somewhat difficult in blogs to follow which comments are later referred to, but it seems that Peter replied to my comment with the following:<br />"Sorry to have to disagree…<br /><br />Any calculation in probability can be done unambiguously by the rules (based on Kolmogorov’s axioms) provided we specify carefully what is the “sample space” and what is the probability measure on the sample space. If you start bringing in other factors like the algorithm used to generate a statement then all you are doing is changing the measure. That is why I said in my example that I am a covert Bayesian. 
"I happen to think that the probability measure that applies in a given situation depends on everything I know about the situation (which may include information about the algorithm used to generate some statement)."<br /><br />He wrote more, which I do not reproduce here.<br /><br />I have been trying to understand why it is difficult to apply the existing theory (Kolmogorov) to apparently simple real-world problems, a fact admitted by Peter. (There is a rumour that even Erdős had difficulty with the Monty Hall problem.)<br /><br />My suggested answer is that there is too great a gap between real-world problems and the theory, and that perhaps there could be a more detailed theory in between, reducing the gap.<br /><br />The more detailed theory would be prior to the construction of the probability space.<br />It would consist in making the real-world problem more precise, by describing a mathematically formal system (algorithm, or protocol). This is not beyond the capacity of mathematics to do.<br /><br />Peter seems to agree when he says "everything I know about the situation", which is another way of saying the precise system under consideration.<br /><br />There is a sleight of hand in the statement "If you start bringing in other factors like the algorithm used to generate a statement then all you are doing is changing the measure". The algorithm is needed to determine the measure; different algorithms determine different measures.Robert Waltershttp://www.blogger.com/profile/11168659017114792898noreply@blogger.com1tag:blogger.com,1999:blog-7446594.post-18458336497863429232014-08-01T02:35:00.004-07:002014-08-15T10:37:19.916-07:00Span(Graph) I<a href="http://rfcwalters.blogspot.it/2014/08/spangraph-ii-circuits-with-feedback.html">Next Span(Graph) post</a>.<br /><br /><br />I have decided to write a series of tutorial posts about Span(Graph) since I think this work has been unfairly neglected. 
I am simultaneously introducing a little TeX into the blog, at the encouragement of Keith Harbaugh.<br /><br />We begin with a simpler category, an algebra of straight-line circuits, which we will use as motivation before introducing ${\bf Span}({\bf Graph})$ in a later post.<br /><a name='more'></a><br /><br /><b>Straight-line Boolean circuits.</b><br /><br />Straight-line Boolean circuits are constructed from components $and$, $or$, and $not$, connected by wires but without feedback. An example:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-H_kGkT8MfCE/U9t_sxC5wpI/AAAAAAAAByU/sLfAMRV-MVc/s1600/circuit.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-H_kGkT8MfCE/U9t_sxC5wpI/AAAAAAAAByU/sLfAMRV-MVc/s1600/circuit.jpg" height="134" width="320" /></a></div><br />There is an algebra in which such a circuit may be described as an expression. The algebra is a category with extra operations.<br />The objects of the category are the sets of the form $B^n=\{0,1\}^n$, the arrows are all the functions between these sets, and composition is composition of functions. We may represent an arrow $B^m\to B^n$ in the category as a box with $m$ input wires and $n$ output wires.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-gSX0vRn2yMo/U9vfwit8YuI/AAAAAAAAByk/YeW-FwRX72k/s1600/component.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-gSX0vRn2yMo/U9vfwit8YuI/AAAAAAAAByk/YeW-FwRX72k/s1600/component.jpg" height="100" width="200" /></a></div><br />Already in the picture above I have represented the basic components $and$, $or$ and $not$ as (specially shaped) boxes with input and output wires.<br /><br />Now composition in the algebra corresponds to joining all the output wires of one component to all the input wires of the next. 
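As a tiny illustration of this composition (the tuple encoding and the names below are my own, not notation from this post): composing the $and$ gate with $not$, outputs joined to inputs, gives the nand function.

```python
# An arrow B^m -> B^n is modelled as a function from m-tuples to n-tuples of bits.
AND = lambda t: (t[0] & t[1],)   # and : B^2 -> B
NOT = lambda t: (1 - t[0],)      # not : B -> B

def compose(f, g):
    """Join all output wires of f to all input wires of g (diagrammatic order)."""
    return lambda t: g(f(t))

NAND = compose(AND, NOT)
print([NAND((x, y)) for x in (0, 1) for y in (0, 1)])
# -> [(1,), (1,), (1,), (0,)]: nand is 1 except on input (1, 1)
```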
The algebra has another important operation and some constants.<br /><br />The product of arrows: Given $f:B^m\to B^n$ and $g:B^k\to B^l$ the product of $f$ and $g$, denoted $f\times g$, is an arrow from $B^{m+k}$ to $B^{n+l}$ which applied to an $m+k$ tuple $(x_1,x_2,\cdots,x_m,y_1,\cdots,y_k)$ gives $(f(x_1,x_2,\cdots,x_m),g(y_1,\cdots,y_k)) $. The picture of $f\times g$ is that of $f$ and $g$ side by side. In the case $m=2$, $n=3 $, $k=3 $, $l=1 $ we have $5$ inputs and $4$ outputs:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-of08d8ZRHbA/U9vf5Y4ax7I/AAAAAAAABys/XPnEJeMuK3c/s1600/product.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-of08d8ZRHbA/U9vf5Y4ax7I/AAAAAAAABys/XPnEJeMuK3c/s1600/product.jpg" height="177" width="200" /></a></div><br />The extra constants are operations on wires that allow the circuit to be built up. They are $1_B:B\to B$ (a single wire), $twist:B^2\to B^2$ (a twist of wires), $\Delta:B\to B^2$ (the splitting of a wire) and the unique function $B\to B^0=1$ (the cutting of a wire). As functions they are defined by $1_B(x)=x$, $twist(x,y)=(y,x)$, $\Delta(x)=(x,x)$.<br /><br /><br />They are pictured as:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-aRrEcHkNCBQ/U9vgDfHdGoI/AAAAAAAABy0/x6Q9HZYprFE/s1600/constants.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-aRrEcHkNCBQ/U9vgDfHdGoI/AAAAAAAABy0/x6Q9HZYprFE/s1600/constants.jpg" height="54" width="320" /></a></div><br /><br />Now consider dividing up the circuit above as indicated in the picture below. 
The circuit is a composite of the parts between the dotted lines, each of which may be expressed, using the product, in terms of the basic components and the constants.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-wQUQpRQI2o8/U9vgJLNQb-I/AAAAAAAABy8/oQWlI5pPEcc/s1600/circuit2.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-wQUQpRQI2o8/U9vgJLNQb-I/AAAAAAAABy8/oQWlI5pPEcc/s1600/circuit2.jpg" height="160" width="320" /></a></div><br /><br />The algebraic expression for the circuit is the composition of first $\Delta\times 1_B\times 1_B$, then $\neg\times twist \times 1_B $, then $\vee\times \& $ and finally $\&$, that is (composing from the left)<br /><br />$(\Delta\times 1_B\times 1_B)(\neg\times twist \times 1_B )(\vee\times \&)(\&)$.<br /><br />The evaluation of this expression in the category yields a function $B^3\to B$ which is the input/output behaviour of the circuit. In fact the input/output function of this circuit is $f(x,y,z)=x\& (y\& z)$, and so the circuit is equal to a simpler circuit with two $and$ gates.<br /><br />There are various other things that could be said at this point but I will mention only briefly that (i) every arrow in this category is the input/output behaviour of a circuit, (ii) the extra operations and constants are exactly those of a category with products, (iii) the equations between expressions are exactly those which are valid in a category with products, plus the Boolean algebra equations satisfied by the basic components.<br /><br />The category we have described does not allow components with internal state, delay, or feedback. 
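The evaluation of such expressions can be sketched in a few lines of Python (a minimal sketch; the tuple encoding and the names `compose` and `tensor` are mine, not notation from this post). An arrow $B^m\to B^n$ is modelled as a pair (input arity, function on bit tuples), and the expression above is checked exhaustively against $x\,\&\,(y\,\&\,z)$:

```python
from itertools import product as all_tuples

# An arrow B^m -> B^n is a pair (m, f), where f maps an m-tuple of bits
# to an n-tuple of bits.
AND   = (2, lambda t: (t[0] & t[1],))
OR    = (2, lambda t: (t[0] | t[1],))
NOT   = (1, lambda t: (1 - t[0],))
WIRE  = (1, lambda t: t)                 # 1_B : a single wire
TWIST = (2, lambda t: (t[1], t[0]))      # twist of two wires
DELTA = (1, lambda t: (t[0], t[0]))      # Delta : splitting of a wire

def compose(*arrows):
    """Composition from the left: feed all outputs of one stage to the next."""
    def f(t):
        for _, g in arrows:
            t = g(t)
        return t
    return (arrows[0][0], f)

def tensor(*arrows):
    """The product f x g: place arrows side by side on disjoint wires."""
    def f(t):
        out, i = (), 0
        for m, g in arrows:
            out += g(t[i:i + m])
            i += m
        return out
    return (sum(m for m, _ in arrows), f)

# (Delta x 1_B x 1_B)(not x twist x 1_B)(or x and)(and)
circuit = compose(
    tensor(DELTA, WIRE, WIRE),
    tensor(NOT, TWIST, WIRE),
    tensor(OR, AND),
    AND,
)

# The input/output behaviour agrees with x & (y & z) on all of B^3.
for x, y, z in all_tuples((0, 1), repeat=3):
    assert circuit[1]((x, y, z)) == (x & (y & z),)
```

Tracing the stages of the expression by hand, the composite computes $(\neg x\vee y)\,\&\,(x\,\&\,z)$, which simplifies to $x\,\&\,(y\,\&\,z)$ as claimed.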
It is to capture these elements that we will introduce ${\bf Span}({\bf Graph})$ in the next post.<br /><br />Robert Waltershttp://www.blogger.com/profile/11168659017114792898noreply@blogger.com4tag:blogger.com,1999:blog-7446594.post-1900396719055053412014-06-04T03:11:00.001-07:002014-08-14T13:08:51.567-07:00Steve Bloom's treeWhen Steve died in 2010 we said we would plant a tree in his memory at our mountain house. Here it is:<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-ANb-hlj2dvw/U47w1AgbS5I/AAAAAAAABt4/oY-lYYbWAPQ/s1600/2014-06-02+19.20.26.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-ANb-hlj2dvw/U47w1AgbS5I/AAAAAAAABt4/oY-lYYbWAPQ/s1600/2014-06-02+19.20.26.jpg" height="300" width="400" /></a></div><br />Robert Waltershttp://www.blogger.com/profile/11168659017114792898noreply@blogger.com0tag:blogger.com,1999:blog-7446594.post-51182685926174106322010-08-11T06:01:00.000-07:002014-08-14T11:51:21.090-07:00Clive Selwyn Davis, 15 April 1916 – 29 October 2009I am rather out of touch with Australian mathematics and so I have just found out by chance that my MSc supervisor Clive Davis died in October last year. There is an <a href="http://www.austms.org.au/Publ/Gazette/2010/Mar10/ObitDavis.pdf">obituary</a> in the Australian Mathematical Society Gazette.<br />I learnt number theory from him, and remember well "the clarity, vigour and high standard of his lectures. His handwriting on the board was immaculate, and his brief<br />dictated final summary of each lecture was much appreciated." 
(quote from the obituary)<br />I wrote a Master's thesis on continued fractions in 1966 with him, and eventually even wrote a small book on number theory.<br />At that time, if I remember correctly, the first PhD in mathematics at the University of Queensland had yet to be awarded, so I went to Bernard Neumann's department at the ANU in Canberra, and changed direction, with Hanna Neumann as my supervisor. I had considered continuing number theory with Kurt Mahler, but my impression was that he was not very keen on having students.Robert Waltershttp://www.blogger.com/profile/11168659017114792898noreply@blogger.com0tag:blogger.com,1999:blog-7446594.post-49479129512082100702012-11-29T05:28:00.000-08:002014-08-14T11:49:47.256-07:00Videos of Lawvere's lectures January 2008, Como, ItalyI have put up all the videos of Bill's lectures in the Sala dei Nobel, Como, Italy, January 2008, at the following address (in the Como Category Theory Archive):<br /><a href="http://comocategoryarchive.com/Videos/videos.html">http://comocategoryarchive.com/Videos/videos.html</a>Robert Waltershttp://www.blogger.com/profile/11168659017114792898noreply@blogger.com0tag:blogger.com,1999:blog-7446594.post-37595682074261965592011-07-21T08:56:00.000-07:002014-08-14T11:48:32.459-07:00FranceI didn't go to the <a href="http://www.pims.math.ca/scientific-event/110717-ictc2">annual category theory</a> meeting, currently occurring in Vancouver - I am beginning to lose enthusiasm for intercontinental travel. I don't know how many times I have flown from Australia to Europe or America - too many times. And the internet has changed the need for so much travel. I do hope the organizers put up slides of the talks, and I would of course have liked to meet old friends.<br /><br />Instead I made a car trip to France<br /><a name='more'></a>, a holiday, with my wife and daughter. 
We left Italy via the Gottardo Tunnel, going to <a href="http://en.wikipedia.org/wiki/Hospices_de_Beaune">Beaune</a>, <a href="http://en.wikipedia.org/wiki/Chartres_Cathedral">Chartres</a>, Paris for a few days, then <a href="http://en.wikipedia.org/wiki/C%C3%B4te_de_Granit_Rose">Brittany</a> for a few days - including <a href="http://en.wikipedia.org/wiki/Carnac">Carnac</a>, returning through the <a href="http://en.wikipedia.org/wiki/Ch%C3%A2teau_d'Azay-le-Rideau">Loire Valley</a>, <a href="http://en.wikipedia.org/wiki/Cath%C3%A9drale_Saint-%C3%89tienne_de_Bourges">Bourges</a>, <a href="http://en.wikipedia.org/wiki/Royal_Monastery_of_Brou">Bourg-en-Bresse</a> and the Monte Bianco Pass. <br /><br />In terms of art and architecture the late 20th century comes out very badly in comparison with <a href="http://www.louvre.fr/llv/oeuvres/detail_departement.jsp?FOLDER%3C%3Efolder_id=1408474395181077&CURRENT_LLV_DEP%3C%3Efolder_id=1408474395181077&FOLDER%3C%3EbrowsePath=1408474395181077&bmLocale=en">Egypt</a>, the <a href="http://en.wikipedia.org/wiki/The_Lady_and_the_Unicorn">Middle Ages</a>,<br />and Renaissance <a href="http://www.louvre.fr/llv/activite/detail_parcours.jsp?CONTENT%3C%3Ecnt_id=10134198673226925&CURRENT_LLV_PARCOURS%3C%3Ecnt_id=10134198673226925&bmLocale=en">Italy</a>.Robert Waltershttp://www.blogger.com/profile/11168659017114792898noreply@blogger.com0tag:blogger.com,1999:blog-7446594.post-76307635423660081712011-06-24T10:51:00.000-07:002014-08-14T11:47:49.440-07:00Ugo Montanari's talk 22 June 2011on the occasion of a celebration in Milan in honour of Gianni Degli Antoni.<br /><br />Ugo spoke about his latest work on a compositional calculus for Petri nets using the wire calculus of Sobocinski.<br /><br />I was disappointed that he didn't mention our work on connectors, which we have spoken about for years, including in joint projects, and which goes back to 1997 or even 1987.<br /><br /><a name='more'></a>In particular because the work of Sobocinski 
derives from ours. It arose in discussions with me (he even agreed that it was a joint project at the time), and in lectures given by me, including one at CT 2008 in Calais, where I emphasized the important difference between the parallel communication in Span(Graph) and that in conventional process algebras such as CSP and CCS. The <a href="http://www-lmpa.univ-littoral.fr/CT08/slides/Walters.ppt">slides are available at the CT site</a>.<br />Sobocinski actually carried out a student project with me in 1997 at the University of Sydney on Span(Graph), and was present at our talks at AMAST 1997 in Sydney, one of which was on representing Petri nets in Span(Graph). He refers to both our AMAST papers in his article in CONCUR 2010, including the following quote:<br />"The operations of the calculus presented in this paper are fundamentally different to those utilised in the aforementioned literature. Indeed, they are closer in nature to those of tile logic [13] and Span(Graph) [18] than to the operations of CCS. More recently, similar operations have been used by Reo [2], glue for component-based systems [8] and the wire calculus [30]. Indeed, in [17] Span(Graph) is used to capture the state space of P/T nets; this work is close in spirit to the translation from nets to terms given in this paper."<br /><br />I might say that even Arbab was influenced by us. He visited us in Como in 2000 and was talking about Manifold. I told him his connectors were not compositional. I believe his subsequent work on Reo was influenced by our discussions. Our paper in COORDINATION 2000, prior to Reo, used compositional connectors in modelling IWIM in CospanSpan(Graph).<br /><br />The point of this post is not incidental.<br />The main contention in all our work is that concurrency theory has taken a wrong direction for thirty years, because of the choice of the wrong basic operations. We have said this in many forums, including IFIP WG1.3, apparently without any reaction. 
For some more detailed comments see <a href="http://rfcwalters.blogspot.com/2009/06/old-posts-5-hundred-errors-of-ccs-begun.html">this post</a>.Robert Waltershttp://www.blogger.com/profile/11168659017114792898noreply@blogger.com0tag:blogger.com,1999:blog-7446594.post-27550394358305492732009-10-03T00:46:00.000-07:002014-08-14T11:47:10.290-07:00Aesop on the importance of having decisive leadershipLiving in Italy, in particular working in Italian academia, makes one think frequently of the problem of good government. There are <a href="http://66.71.178.156/materiali%20didattici/De%20Benedictis%202004/buongov.htm">famous frescoes</a> of Lorenzetti in the Comune di Siena called "ALLEGORIA DEL BUONO E CATTIVO GOVERNO" ("Allegory of Good and Bad Government").<br /><br />We are now at a critical point in the University of Insubria in the discussion of University reform. It looks as if we may finally have to implement the 25% reduction in courses proposed by the minister (for universities which until now had the minimum number of teachers) - though Italians tell me that I am naive to imagine such a definite result. Some of us believe that decisive leadership is crucial at this time. I would prefer wise leadership. Maybe even benevolent leadership.<br /><br />I suppose that is also naive. Consider the <a href="http://it.wikisource.org/wiki/Il_Re_Travicello">following story</a>. <br /><br /><br />THE FROGS WHO WISHED FOR A KING<br /><br />The Frogs were tired of governing themselves. They had so much freedom that it had spoiled them, and they did nothing but sit around croaking in a bored manner and wishing for a government that could entertain them with the pomp and display of royalty, and rule them in a way to make them know they were being ruled. No milk-and-water government for them, they declared. 
So they sent a petition to Jupiter asking for a king.<br /><br />Jupiter saw what simple and foolish creatures they were, but to keep them quiet and make them think they had a king he threw down a huge log, which fell into the water with a great splash. The Frogs hid themselves among the reeds and grasses, thinking the new king to be some fearful giant. But they soon discovered how tame and peaceable King Log was. In a short time the younger Frogs were using him for a diving platform, while the older Frogs made him a meeting place, where they complained loudly to Jupiter about the government.<br /><br />To teach the Frogs a lesson the ruler of the gods now sent a Crane to be king of Frogland. The Crane proved to be a very different sort of king from old King Log. He gobbled up the poor Frogs right and left and they soon saw what fools they had been. In mournful croaks they begged Jupiter to take away the cruel tyrant before they should all be destroyed.<br /><br />"How now!" cried Jupiter. "Are you not yet content? You have what you asked for and so you have only yourselves to blame for your misfortunes."Robert Waltershttp://www.blogger.com/profile/11168659017114792898noreply@blogger.com0