FQXi announced the prizes for the Essay Contest on the Nature of Time. Results are here.
My essay did not receive an award.
An interesting interview with Carver Mead, author of the (unconventional) Collective Electrodynamics: Quantum Foundations of Electromagnetism.
THE SCIENCE AND PHILOSOPHY OF UNCONVENTIONAL COMPUTING (SPUC09)
Cambridge (UK), March 23-25, 2009
SECOND CALL FOR PAPERS
We welcome submissions on topics normally classified under ‘natural computing’ or ‘unconventional computing’ or ‘hypercomputing’ including (but not restricted to) quantum computing, relativistic computing, biology-based computing, analogue computing, and also submissions on the philosophical implications of these new fields for topics including (but again not restricted to) philosophy of mind, philosophy of mathematics, the Church-Turing thesis.
Each presentation should last no more than 30 minutes; a further 10 minutes will be allowed for discussion.
Those wishing to make a presentation should submit by email a 250-word abstract of their paper to Mark Hogarth (firstname.lastname@example.org); enquiries to the same.
Registration fee (yet to be fixed) will be around £100.
Student bursaries are available.
Conference website: http://web.mac.com/mhogarth/Site/SPUC_Conference.html
Mark Hogarth (Cambridge, UK)
CONFIRMED INVITED SPEAKERS
Selmer Bringsjord (New York, USA)
Jeff Barrett (Irvine, USA)
Philip Welch (Bristol, UK)
Tim Button (Harvard, USA)
Cristian Calude (Auckland, New Zealand)
István Németi (Budapest, Hungary)
Benjamin Wells (San Francisco, USA)
Hajnal Andréka (Budapest, Hungary)
Apostolos Syropoulos (Xanthi, Greece)
Susan Stepney (York, UK)
Bruce MacLennan (Tennessee, USA)
Peter Kugel (Boston, USA)
Mark Sprevak (Cambridge, UK)
Selim Akl (Kingston, Canada)
José Félix Costa (Swansea, UK)
Mike Stannett (Sheffield, UK)
John Tucker (Swansea, UK)
Barry Cooper (Leeds, UK)
Sponsored by EPSRC through HyperNet (the Hypercomputation Research Network, EP/E064183/1)
I am writing to ask for your assistance in drawing the attention of exceptional, highly motivated students to the Perimeter Scholars International (PSI) program.
PSI is an innovative, Masters level course designed to prepare students for cutting-edge research in theoretical physics. It provides a broad overview, allowing students to choose their preferred specialisation, and extensive tuition in formulating and solving interesting problems.
The due date for applications is February 1st: applications received after this date may still be considered but only as long as places remain available.
A number of outstanding lecturers have already signed up to teach, including for example Yakir Aharonov, Phil Anderson, Matt Choptuik, Nima Arkani-Hamed, John Cardy, Ruth Gregory, Michael Peskin, Sid Redner, Xiao-Gang Wen, and a number of Perimeter Institute research faculty. They will be supported by full-time tutors dedicated to the course.
All accepted students will be fully supported.
For further details, see www.perimeterscholars.org.
Thank you in advance for helping us to make this exciting opportunity known as widely as possible.
With my best wishes,
Perimeter Institute for Theoretical Physics
Waterloo, Ontario, Canada
I have submitted an essay to the FQXi competition. If you are interested in reading it, click here.
Title: On the Nature of Time – Or Why Does Nature Abhor Deadlocks?
This essay aims at introducing a novel point of view on the nature of time, inspired by a synthesis of three seemingly unrelated concepts: Bergson’s notion of duration, Dijkstra’s notion of concurrency, and Mach’s notion of inertia.
Edit (June 9th 2009): Apparently, the essays on the nature of time are no longer available at the FQXi site. I have made a very few small corrections and modifications in my essay and a new version is available here (pdf file).
Some of the talks from the 15th meeting on the Foundations of Physics (held in Leeds in March 2007) are available to download as video (mp4 format).
In quantum gravity, the notion of a spacetime event (point) breaks down near the Planck scale. We would like to replace the notion of a point with the notion of a “smeared” point, or more precisely, an open set of concurrent processes. The spacetime foam would be represented by a coherent superposition of sets of processes that act concurrently, and the reason that they act concurrently lies in the fact that they superpose with each other to a certain extent. (Note that such a superposition is not in space, because the processes themselves should represent a relational framework from which spacetime emerges at the classical level). It is a superposition constrained by causality. Causality should somehow represent the local “shared resource” among the coherent set of processes. That is why they act concurrently: they are causally constrained. A quantum state of a volume of spacetime can only evolve as the result of the combined action of the fundamental processes that compete for a common resource: causality itself.
Each set of concurrent processes can be represented by a ditopological[*] space, and each ditopological space is partially mapped into some other because of the local causality constraint. The ditopological spaces and their common (causal) superposition maps (the abstract shared resources) should have a combined effect of correlations and anti-correlations among processes (that is, allowed and forbidden regions in the ditopological spaces at their common superpositions), and these correlations/anti-correlations should correspond to, for instance, a discrete spectrum of the volume operator, expected intuitively for a quantum spacetime foam (and calculated precisely in LQG).
Let us attempt a somewhat primitive reinterpretation of a quantum state as a set of processes.
You can see a process as a function that gives some output from a given input. It is somewhat like a unitary operator. A given particle state evolves to another as the combined result of a set of many processing, “internally active” elements. Imagine a qubit. A set of quantum states encompassing any possible state between |0> and |1> is here what I call a set of “fundamental processes”.
Internally, a lot of activity is happening in the qubit. The processes are not simply independent agents, but active elements which share finite mutual “resources”. I’m not sure at the present stage what to make of these “shared resources” in such a reinterpreted quantum theory (in LQG, they could be seen as the edges of a spin network: nodes *share* common edges representing spin, so the evolution of spin network states could be seen as a result of how these processes act and share resources; in the present case, their role would be to maintain gauge invariance). In other words, the processes do not act completely independently, but are concurrent in the sense that they need to access some shared resources in order to evolve.
I’m not sure what happens when you observe the system in such a framework.
Imagine now a n-dimensional space in which every orthogonal axis represents a process. Every point along an axis is a representation of a given quantum state (input) evolving to another (output). For instance, in one axis you could set up the following “scheduling”:
|0> -> |1/sqrt(2)> -> |1> -> |0> -> …
in another axis, this one:
|1/sqrt(2)> -> |1> -> |0> -> |1/sqrt(2)> -> |1> -> …
and so on, so you see there is quite a large number of possible schedules for the qubit. But, say, two processes could not be sharing the same resources at the same “time”: this translates to some constraint that represents the forbidden usage of (or action upon) the same common resource by different processes which are competing at the same “time” (here, “time” is also to be interpreted in some partially ordered sense).
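As a toy illustration of this mutual-exclusion constraint (nothing here is part of an established quantum formalism; the state labels, the two schedules, and the resources “r1”/“r2” are all invented), one can mechanically check two candidate schedules for conflicting steps:

```python
# Toy sketch: two hypothetical "process" schedules for a qubit.
# Each step is (state_label, resource_used). The constraint is that
# no two processes may act on the same resource at the same step.

def conflicts(schedule_a, schedule_b):
    """Return the steps at which both processes touch the same resource."""
    return [t for t, ((_, ra), (_, rb)) in enumerate(zip(schedule_a, schedule_b))
            if ra == rb]

# Invented schedulings, loosely following the text's examples:
proc1 = [("|0>", "r1"), ("|+>", "r2"), ("|1>", "r1"), ("|0>", "r2")]
proc2 = [("|+>", "r2"), ("|1>", "r1"), ("|0>", "r1"), ("|+>", "r1")]

forbidden_steps = conflicts(proc1, proc2)
print(forbidden_steps)  # step 2: both processes would use "r1" at once
```

Each conflicting step is a point that must be excised from the space of joint schedules, which is where the “forbidden regions” below come from.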
All possible scheduling (histories) of each of the processes form, combined, a directed topological manifold that encodes *all* the possible histories of a given particle. “Directed” in the sense that there is a local partial order structure imposed on the manifold as the quantum states evolve (a direction of “time”). A point in this manifold represents a superposition of processes (state functions). But since the processes that evolve quantum states must share common resources, there are forbidden regions on the manifold because the processes cannot access the same resource at the same “time”. So there are natural constraints that must be obeyed, and these constraints determine a topological, typical signature of the quantum system in question.
These constraints (that actually forbid the system to go into some kind of “deadlock”) could be seen as correlations/anti-correlations between quantum states, thus providing an interpretation of why energy levels of a harmonic oscillator are quantized, for instance.
There is an emergent field joining topology and concurrency theory — “di”topology — that studies these various ditopological manifolds, which carry this extra structure (“di”rection).
It turns out that the idea seems to be easier to grasp when you include gravity. The reason is that the scheduling of concurrent processes can be described, as I said, in topological terms by a manifold with a local partial order, a ditopological manifold. And pictorially, spin networks (or spin foams, for what it’s worth) seem adequate to fit this idea in a more immediate sense because of causality issues.
But that is another story.
In our world, concurrency is everywhere. It is so ubiquitous and ordinary in our daily lives that we really do not give any conscious importance to it.
It is clear that, in a classical setting, Nature operates its constituents concurrently. Animals and plants can change, interact, evolve, disturb and be disturbed in a completely concurrent manner. For instance, living creatures can operate many sensory organs or chemical reactions independently. Inanimate objects can be acted upon concurrently, and physical phenomena also present concurrent behaviour. For example, when a ball falls in a gravitational field, as it changes its kinetic energy, many other processes are allowed to happen along its worldline (e.g., it may burn as it falls).
All this seems ridiculously obvious. In engineering and computational design problems, the obviously concurrent behaviour of the external world, including how information is processed and transmitted, must be worked out and anticipated quite explicitly. Sequential modeling can be viewed as a special case of concurrent modeling, and it is much easier to implement. But as computational facilities evolved, a paradigm shift from sequential to concurrent modeling started to gain importance, and we humans had to learn to “think” about concurrency and develop techniques to deal with it. We had been used to the sequential paradigm for a long time.
Physics, on the other hand, seems to have taken for granted the concurrent behaviour of the world as an obvious consequence of dynamical laws, whose purpose is to describe how a given configuration of self-interacting constituents evolves, given a time span, to another one.
Indeed, the parallel evolution of a given set of independent physical elements is a very uninteresting case of concurrency, so it is no surprise that physics never considered this as a fundamental problem at all. But when the elements must interact with each other, things get really interesting. In a concurrent language, we would say that the elements would have to share some “common resource” (like a mediating fundamental field). That is when concurrency is also interesting in its whole complexity.
Is the quantum world concurrent as well? I would say yes, but in my vision, something fundamental about how nature operates its concurrent aspects emerges in the quantum regime.
If concurrency is some deeply inherent property of the world, then it is not a consequence of dynamics, but the fundamental cause of it.
I believe that quantum gravity is the best instance in which such a change of paradigm can be worked out. Quantum gravity should emerge from some internal, concurrent description of the state space.
I will post, in due course, on why I believe this is the case.
Nature might abhor naked spacetime singularities as much as “quantum deadlock states”. Deadlocks necessarily occur in cyclic, symmetric configurations (circular wait condition) of processes, along with severe limitations on how they share resources (like mutual exclusion and no preemption conditions). When a deadlock occurs, no further evolution of the system state is possible.
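The circular-wait condition mentioned above can be made concrete with a standard wait-for-graph check from concurrency theory (the process names are, of course, invented):

```python
# Sketch: detect the circular-wait condition in a "wait-for" graph,
# where an edge p -> q means process p is waiting for a resource held by q.
# A cycle in this graph is the classic signature of a deadlock.

def has_circular_wait(waits_for):
    """Depth-first search for a cycle in the wait-for graph."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {p: WHITE for p in waits_for}

    def visit(p):
        color[p] = GRAY
        for q in waits_for.get(p, []):
            if color.get(q, WHITE) == GRAY:
                return True  # back edge: a cycle exists
            if color.get(q, WHITE) == WHITE and visit(q):
                return True
        color[p] = BLACK
        return False

    return any(color[p] == WHITE and visit(p) for p in waits_for)

# A symmetric, cyclic configuration: P1 waits for P2, P2 for P3, P3 for P1.
deadlocked = {"P1": ["P2"], "P2": ["P3"], "P3": ["P1"]}
# Breaking the symmetry removes the cycle:
safe = {"P1": ["P2"], "P2": ["P3"], "P3": []}

print(has_circular_wait(deadlocked))  # True
print(has_circular_wait(safe))        # False
```

Note how breaking the cyclic symmetry of the configuration is exactly what removes the deadlock, which is the intuition behind the “slightly broken symmetric state” suggested below.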
A quantum gravity theory incorporating a concurrent evolution for the spacetime quanta should therefore have a very “slightly” broken symmetric state of internal competing dynamical actions. The macroscopic, continuum limit of such a model should lead to a highly non-symmetrical, mostly independent set of processes in which the classical spacetime should emerge.
I’ve been working on the idea that quantum systems are inherently concurrent systems (see this previous post under construction). They can be examined under directed algebraic topology (or ditopology) tools (e.g., see here, or google about it).
The most interesting problem to model is quantum entanglement: the underlying hypothesis is that a quantum state fundamentally corresponds to a set of concurrent processes. It would be interesting to examine how to restore local realism under this general hypothesis.
One can go further and ask whether quantization itself arises from this idea. If quantum systems are seen as concurrent systems, there are naturally forbidden regions in the multi-dimensional state space (where each dimension represents a concurrent process), defined by the fact that these processes locally share “resources” and no two processes can “act” on the shared resources at the same “time”. Forbidden regions would then correspond to the discretization of nature in the quantum limit.
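In the directed-topology literature this picture is often drawn as a “progress graph”: one axis per process, with mutual exclusion carving a forbidden rectangle out of the plane. A minimal sketch, with invented lock-holding intervals:

```python
# Sketch of a 2-process "progress graph" with one shared lock.
# Each process acquires the lock at step a and releases it at step b;
# a grid point (x, y) in the progress plane is forbidden when both
# processes would hold the lock simultaneously.

def forbidden_region(hold_a, hold_b, size):
    """Grid points (x, y) excluded by mutual exclusion on one lock."""
    (a0, a1), (b0, b1) = hold_a, hold_b
    return {(x, y) for x in range(size) for y in range(size)
            if a0 <= x < a1 and b0 <= y < b1}

# Invented intervals: process A holds the lock during steps [2, 5),
# process B during steps [1, 4), on a 6x6 progress grid.
region = forbidden_region((2, 5), (1, 4), 6)
print(len(region))  # a 3x3 forbidden rectangle: 9 excluded points
```

Allowed histories are then the directed paths that go around the rectangle, which is where the discretization intuition comes from: topologically inequivalent classes of allowed schedules.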
So nature would fundamentally be a huge deadlock avoidance system.
Update: discussions can be found in the comment section of this blog entry over at n-Category Café.
Update: Below follows, for the record, a cut and paste from the discussion over at n-Category café. For the detailed links, please refer to that blog entry (see link above), where the comments appeared.
Christine, I assume the application of directed algebraic topology to concurrent systems is an outgrowth of the work of Herlihy and Shavit, which was done in the late 90s and won the 2004 Gödel Prize.
You might want to communicate with Prakash Panangaden at McGill about your work. He appears to have worked, as a theoretical computer scientist, on concurrent and distributed systems and also on quantum computing, and has a strong interest in the causal structure of general relativity and its connection to abstract notions arising in computer science.
Chris W. on January 17, 2007 1:58 AM
In terms of applications of general geometrical methods to concurrency, the idea goes back to the 70’s. See Goubault, E., Geometry and Concurrency: A User’s Guide. But you are right to cite the work of Herlihy and Shavit as evidence of how this subject has been getting a lot of attention recently, giving rise to important developments in the field. It looks like a beautiful paper, but I have never studied it in detail. Panangaden has also published in gr-qc, and this is one of his most interesting papers, I guess… I don’t know how much attention this is getting, but it sounds like an original and promising line of research.
Christine Dantas on January 17, 2007 11:51 AM
Hi Christine, you probably already know that Carl Petri, one of the founders of Concurrent Systems as a field of study, used to propound the view that an adequate semantics for concurrency would need to be sufficiently broad to also encompass physics.
From what I’ve been reading recently on the Anima ex Machina blog, he’s updated this position somewhat to a more “out-there” Universe-is-a-Petri-net kind of stance. He might be right! (as might Smolin – I saw your Amazon review!) On the same page I linked you can also find thoughts of Seth Lloyd and some others on this topic from a conference in Berlin last year.
You’re probably also familiar with Vaughan Pratt who clearly thinks very deeply on this sort of topic (although his publications can be frighteningly dense).
When I was a CS student at Glasgow they used to ask us questions like the “big simulation” argument, just to freak us out. I was never able to see why this would confuse a hardy theoretical physicist though — surely they’d be interested (in theory) in the turtle at the bottom of the tower, i.e. the real simulation that presumably has to simulate itself?!
I’m looking forward to seeing where the quantum lambda-calculus course goes with this kind of thing. Quantum computation seems to rely on arbitrary-precision complex amplitudes — but having gone to all the trouble of defining True and False as morphisms in the course, surely we won’t now be allowed to pull high-precision complex numbers out of the hat? If you were designing a programming language they’d need to be defined and computed themselves somehow… who “computes” these amplitudes that they’re relying on?
Allan E on January 17, 2007 2:09 AM
And according to Nielsen and Chuang,
“Quantum computation and quantum information has taught us to think physically about computation (…) we can also learn to think computationally about physics.”
This view is attractive and I believe physics is heading towards this broad idea. I do not have a clue how far this will lead us.
Concerning Petri nets, these and other classical concurrency models have been generalized to the concept of po-spaces (“po” from “partial order”). See Sokolowski, S., Directed topology and concurrency a short overview.
Christine Dantas on January 17, 2007 12:21 PM
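[For a first feel of the po-space concept mentioned above: the simplest example is R^n with the componentwise partial order, in which a directed path (“dipath”) never decreases in any coordinate. A tiny illustrative check, with invented sample paths:]

```python
# Sketch: in the simplest po-space, R^n with the componentwise partial
# order, a directed path ("dipath") never decreases in any coordinate.
# Here a discrete path is just a list of points.

def is_dipath(points):
    """True if every coordinate is non-decreasing along the path."""
    return all(all(a <= b for a, b in zip(p, q))
               for p, q in zip(points, points[1:]))

forward = [(0, 0), (1, 0), (1, 1), (2, 2)]  # monotone: a valid dipath
backtrack = [(0, 0), (1, 1), (0, 2)]        # first coordinate decreases

print(is_dipath(forward))    # True
print(is_dipath(backtrack))  # False
```

In a progress graph, the coordinates are the progress counters of the individual processes, so a dipath is precisely an execution in which no process runs backwards.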
And concerning the question in your last paragraph, I don’t know how to answer that… There is a huge gap between a broad idea and the technical details that one has to face in order to make the idea actually work or make sense! But thanks for pointing that out, I’ll have to think about it.
Christine Dantas on January 17, 2007 12:31 PM
Let me see if I understand what you’re getting at. If there are correlations among a set of concurrent processes, they must be constrained to avoid deadlocks. Perhaps the necessary constraints induce correlations and anti-correlations among states and state transitions in the system that resemble correlations among quantum states. These correlations also imply that the system avoids parts of its state space.
You seem to be assuming discretization at the outset, insofar as the number of concurrent processes is finite, and the associated state space is finite. Should I assume that you have in mind a continuous analog of a set of concurrent processes?
Here is a simple and fairly concrete model you might want to examine. Consider the state space to include a set of n Boolean variables, and the concurrent processes to be n Boolean transition functions of two variables that yield a “subsequent” state for each of the variables. The processes (transition functions) share resources in the significant sense that each of the functions operates on two variables, at least one of which might be used as an input by another function. What would constitute a deadlock in such a system, and what is required to avoid it? Is there necessarily a stochastic component to the system’s behavior? That is, does the transition structure have to rearrange itself every so often, in a way that can’t be described deterministically? I realize this is a very sketchy problem formulation.
By the way, with respect to the relevance of algebraic topology, the set of transition functions in this simple model can be considered as defining an abstract simplicial complex; the Boolean variables are the vertices and the functions define the edges. (Remember however that these functions have some internal structure, since we have 16 distinct, albeit interrelated, Boolean functions to choose from.) My previous questions can then be posed as questions about the “moves” that can and must occur within this complex, in order to avoid certain “pathologies” in the evolution of the system. (By the way, in this light, the collection of transition functions can be loosely regarded as a “gas” of edges [1-D!] which is evolving in parallel with and necessarily coupled to a “gas” of Boolean state variables.)
One more point: The Boolean variables are of course undergoing transitions with successive “time steps”. One might ask if one can define a causal order on these transitions or “events”. That is, given one such event, can one say that it was preceded in a well-defined way by another set of events, and followed accordingly by yet another set of events? It is not immediately evident how such a description is to be derived from the transition functions, although one might plausibly expect that it should be possible.
(I have in mind here a complementary description in terms of causal sets. By the way, regarding the compatibility of discreteness based on causal sets with Lorentz invariance, see Discreteness without symmetry breaking: a theorem [1 May 2006].)
Chris W. on January 17, 2007 4:26 AM
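[Chris W.’s toy model, and his question about a causal order on events, can both be written down concretely. What follows is only one possible reading, with an arbitrary wiring and an arbitrary choice of Boolean functions: a fixed point, a state that maps to itself, plays the role of a state with no further evolution, and event (i, t) is taken to depend causally on the events that produced its inputs.]

```python
from itertools import product

# One reading of the toy model: n Boolean variables updated synchronously,
# each by a 2-input Boolean function of two (possibly shared) variables.
# The wiring and the function choices below are arbitrary illustrations.

AND = lambda a, b: a and b
OR  = lambda a, b: a or b
XOR = lambda a, b: a != b

# Variable i's next value is f(state[j], state[k]).
network = [(XOR, 0, 1), (OR, 1, 2), (AND, 2, 0)]

def step(state):
    return tuple(f(state[j], state[k]) for f, j, k in network)

# States with no further evolution are the fixed points of the update map.
fixed_points = [s for s in product((False, True), repeat=3) if step(s) == s]

# A causal order on events (variable, t): event (i, t) depends on the
# events that produced its inputs at step t - 1; take the transitive closure.
def causal_past(i, t):
    past, frontier = set(), set()
    if t > 0:
        _, j, k = network[i]
        frontier = {(j, t - 1), (k, t - 1)}
    while frontier:
        e = frontier.pop()
        if e not in past:
            past.add(e)
            ei, et = e
            if et > 0:
                _, j, k = network[ei]
                frontier |= {(j, et - 1), (k, et - 1)}
    return past

print(fixed_points)
print(sorted(causal_past(0, 2)))
```

For this particular wiring the causal past of an event fans out over all three variables within two steps, so the derived partial order is non-trivial even though the update rule is deterministic.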
Dear Chris W.,
Thanks for this elaborate comment. Your first paragraph summarizes well the broad idea.
And yes, one could assume a continuous “scheduling” of the processes, and also the space could be assumed to be infinite dimensional. What I think is important here is to assume a local partial order – a “po-space”… (I’m interested in ergodic moves…) Examples of such spaces are found here:
* Sokolowski, S., Classifying holes of arbitrary dimensions in partially ordered cubes. Tech. Rep. 2000-1, Kansas State University, Computing and Information Sciences, Aug. 2000. link here.
* Raussen, M., Geometric investigations of fundamental categories of dipaths. Unpublished, 2001. (sorry, can’t find the link now).
* Gaucher, P., About the globular homology of higher dimensional automata. Cahiers de Topologie et Géométrie Différentielle Catégoriques XLIII-2 (2002), 107–156. link here.
And thanks for proposing a problem. I guess we all have sketchy ideas for the moment! No problem about that, in fact it is great to know that this is quite an open field for research.
Your comment has given me material to think over for months! I don’t have much of value to add for the moment, but thanks a lot.
According to one of the greatest pioneers of concurrency theory, Edsger W. Dijkstra,
The first challenge for computing science is to discover how to maintain order in a finite, but very large, discrete universe that is intricately intertwined.
Remarkably, at a conceptual level, this is not much different from the challenge of constructing a consistent theory of quantum gravity, a major open problem of physics today.
According to Nielsen & Chuang,
Quantum computation and quantum information has taught us to think physically about computation (…) we can also learn to think computationally about physics.
Can we describe and understand Nature in a fundamental level as a concurrent system?
Perhaps the best phenomenon with which to examine the idea of concurrency is quantum entanglement. Correlations of physical properties between two entangled particles, as tested experimentally so far, cannot be explained under local realism. A reinterpretation of what fundamentally underlies a quantum state, for instance as an inherently concurrent set of processes, still needs to be worked out.