Wednesday, July 25, 2007

Summertime

We should perhaps update this blog more often but we find almost no time to do that! Alain is very busy editing, correcting, and finalizing his book, Noncommutative Geometry, Quantum Fields and Motives, with Matilde Marcolli. This is no easy task given the size and scope of the work. BTW, I hope sometime soon we will have a good and extensive review of this book in this blog. I am also busy traveling, going to conferences and giving talks.

Last week I attended the Max Planck Institute conference on Hochschild and cyclic cohomology. The meeting was on Hopf cyclic cohomology and higher homotopy structures on Hochschild and cyclic complexes (the so-called "stringy topology"). Somehow both topics are very hot these days and I shall report on some of the talks there later. This is the second in a series of three conferences at MPI this summer, all co-organized by Matilde and devoted to noncommutative geometry and its applications.

Another summer conference is a meeting in Warsaw in honor of Paul Baum's 70th birthday. Happy birthday Paul!

Tuesday, July 17, 2007

Non Standard stuff

I am not sure I really know how to make use of a "blog" like this one. Recently I had to write a solicited paper describing the perspective on the structure of space-time obtained from the point of view of noncommutative geometry. At first I thought that I could just be lazy and, after the paper was written (it is available here), use pieces of it to keep this blog alive during the summer vacations. However, when trying to do that, I realized that it was better (partly because of the impractical use of LaTeX in the blog) to first make the paper available and then tell in the blog the additional things one would not "normally" write in a paper (even a non-technical general public paper such as the above). I am not keen on turning the blog into a place for controversies since it is unclear to me that one gains a lot in such discussions. The rule seems to be that, most often, people have prejudices against new stuff mostly because they don't know enough, and take the lazy attitude that it is easier to denigrate a theory than to try and appreciate it. I am no exception and have certainly adopted that attitude with respect to supersymmetry or string theory. A debate will usually exhibit the strong opinions of the various sides and it is rare that one witnesses a real change taking place. So much for the "controversy" side. However I do believe that there are some points that can be quite useful to know and which, provided they are presented in a non-polemic manner, can help a lot to avoid some pitfalls. I will discuss as an example the two notions of "infinitesimals" that I know and try to explain the relevance of both. This is not a "math paper" but rather an informal discussion.
When I was a student in Ecole Normale about 40 years ago, I fell in love with a new math topic called "nonstandard analysis" which was advocated by A. Robinson. Being a student of Gustave Choquet at that time, I knew a lot about ultrafilters. These maximal filters were (correct me if I am wrong) discovered by H. Cartan during a Bourbaki workshop. At that time Cartan had no name for the new objects but he had found the remarkable efficiency they had in any proof where compactness and choice arguments were needed. So (this I heard from Cartan) the name he was using was "boum"!!! Of course he knew that it gave a one line proof of the existence of the Haar measure (boum...). And also that, because of the uniqueness of the latter, it was in fact proving a rather strong convergence statement on the counting functions that approximate the Haar measure. He wanted to make sure, and wrote in a Comptes Rendus note the full details of a direct geometric argument proving the expected convergence. From ultrafilters to ultraproducts is an easy step. And I got completely sold on ultraproducts when I learnt (around that time) about the Ax-Kochen theorem: the ultraproduct of the p-adic fields is isomorphic to the ultraproduct of the local function fields with the same residue fields. Thus I started trying to work in that subject and obtained, using a specific class of ultrafilters called "selective", a construction of minimal models in nonstandard analysis. They are obtained as ultraproducts but the ultrafilters used are so special that, for instance, in order to know the elements of the ultrapower of a set X, one does not need to care about the labels: the image ultrafilter in X is all that is needed. I wrote a paper explaining how to use ultraproducts and always kept that tool ready for use later on. I used it in an essential manner in my work on the classification of factors. So much for the positive side of the coin. However, quite early on I had tried in vain to implement one of the "selling ads" of nonstandard analysis, namely that it was finally giving the promised land for "infinitesimals". In fact the ads came with a specific example: a purported answer to the naive question "what is the probability "p" that a dart will land at a given point x of the target" in a game of darts. This was followed by 1) the simple argument why that positive number "p" is smaller than epsilon for any positive real epsilon, 2) one hundred pages of logic, 3) the identification of "p" with a "non-standard" number...
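Stated in symbols (just the loose form quoted above, for a non-principal ultrafilter \mathcal{U} on the set of primes; the precise statement of the theorem involves elementary equivalence):
\[ \prod_{\mathcal{U}} \mathbb{Q}_p \;\simeq\; \prod_{\mathcal{U}} \mathbb{F}_p((t)) . \]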
At first I attributed my inability to concretely get "p" to my lack of knowledge in logic, but after realizing that the models could be constructed as ultraproducts this excuse no longer applied. At this point I realized that there is a fundamental reason why one will never be able to actually "pin down" this "p" among non-standard numbers: from a non-standard number (non-trivial of course) one canonically deduces a non-measurable character of the infinite product of two-element groups (the argument is simpler using a non-standard infinite integer "n": just take the map which to a sequence (a_k) of 0's and 1's assigns its value a_n at the index "n"). Now a character of a compact group is either continuous or non-measurable. Thus a non-standard number gives us canonically a non-measurable subset of [0,1]. This is the end of the rope for being "explicit" since (from another side of logic) one knows that it is just impossible to construct explicitly a non-measurable subset of [0,1]!
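Here is a sketch of that argument written a bit more explicitly (my notation): let n be a non-trivial non-standard integer and consider the compact group
\[ G=\prod_{k\in\mathbb{N}} \mathbb{Z}/2\mathbb{Z}, \qquad \chi\big((a_k)\big)=(-1)^{a_n}\in\{\pm 1\}, \]
where a_n denotes the value at the index n of (the non-standard extension of) the sequence. Then \chi is a character of G, it is not continuous (a continuous character of G only depends on finitely many coordinates), hence by the dichotomy above it is non-measurable, and its kernel, transported through the identification of G with [0,1] by binary expansions, gives the announced non-measurable subset.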
It took me many years to find a good answer to the above naive question about "p". The answer is explained in detail here. It is given by the formalism of quantum mechanics which, as explained in the previous post on "infinitesimal variables", gives a framework where continuous variables can coexist with infinitesimal ones, at the only price of having more subtle algebraic rules where commutativity no longer holds. The new infinitesimals have an "order" (an infinitesimal of order one is a compact operator whose characteristic values \mu_n are O(1/n)). The novel point is that they have an integral, which in physics terms is given by the coefficient of the logarithmic divergence of the trace. Thus one obtains a new stage for the "calculus", and it is at the core of noncommutative differential geometry.
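To fix the notation used above (a brief summary, in standard operator-theoretic form): for a compact operator T with characteristic values \mu_1(T) \ge \mu_2(T) \ge ..., being an infinitesimal of order one means \mu_n(T)=O(1/n), and its integral is the coefficient of the logarithmic divergence of the partial sums of the \mu_n:
\[ \mu_n(T)=O(1/n), \qquad \int T \;=\; \lim_{N\to\infty}\ \frac{1}{\log N}\ \sum_{n<N}\mu_n(T) \]
(when the limit exists; in general a limiting procedure, the Dixmier trace, is used).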

In Riemannian geometry the natural datum is the square of the line element, so that when computing the distance d(A,B) between two points one has to minimize, over continuous paths from A to B, the integral of the square root of g_{\mu\nu} dx^\mu dx^\nu. Now it is often true that "taking a square root" in a brutal manner as in the above expression is hiding a deeper level of understanding. In fact this issue of taking the square root led Dirac to his famous analogue of the Schrödinger equation for the electron and the theoretical discovery of the positron. Dirac was looking for a relativistically invariant form of the Schrödinger equation. One basic property of that equation is that it is of first order in the time variable. The Klein-Gordon equation, which is the relativistic form of the Laplace equation, is relativistically invariant but is of second order in time. Dirac found a way to take the square root of the Klein-Gordon operator using Clifford algebra. In fact (as pointed out to me by Atiyah) Hamilton had already written the magic combination of partial derivatives using his quaternions as coefficients and noted that this gave a square root of the Laplacian. When I was in St. Petersburg for Euler's 300th anniversary, I noticed that Euler could share the credit for quaternions since he had explicitly written their multiplication rule in order to show that the product of two sums of 4 squares is a sum of 4 squares.
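For concreteness, here is the quaternionic identity alluded to, as a one-line computation (standard, and not specific to the paper): since i^2 = j^2 = k^2 = -1 and the quaternion units anticommute, the cross terms cancel and
\[ \big( i\,\partial_x + j\,\partial_y + k\,\partial_z \big)^{2} \;=\; -\big( \partial_x^{2} + \partial_y^{2} + \partial_z^{2} \big) \;=\; -\Delta . \]
Dirac's operator plays the same game with the Clifford algebra: D = \gamma^\mu \partial_\mu with \{\gamma^\mu,\gamma^\nu\} = 2 g^{\mu\nu} gives D^2 = g^{\mu\nu}\partial_\mu\partial_\nu, the Klein-Gordon operator.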
So what is the relation between Dirac's square root of the Laplacian and the above issue of taking the square root in the formula for the distance d(A,B)? The point is that one can use Dirac's solution and rewrite the same geodesic distance d(A,B) in the following manner: one no longer measures the minimal length of a continuous path but one measures the maximal variation of a function, i.e. the absolute value of the difference f(A)-f(B). Of course without a restriction on f this would give infinity, but one requires that the commutator [D,f] of f with the Dirac operator is bounded by one. Here we are in our "quantized calculus" stage, so that both the functions on our geometric space and the Dirac operator are concretely represented in the same Hilbert space H. H is the Hilbert space of square integrable spinors and the functions act by pointwise multiplication. The commutator [D,f] is the Clifford multiplication by the gradient of f, so that when the function f is real its norm is just the sup norm of the gradient. Then saying that the norm of [D,f] is less than one is the same as asking that f be a Lipschitz function with constant one, i.e. that the absolute value of f(A)-f(B) is less than d(A,B), where the latter is the geodesic distance. For complex valued functions one only gets an inequality, but it suffices to show that the maximal variation of such f gives exactly the geodesic distance: i.e. we recover the geodesic distance d(A,B) as the supremum of |f(A)-f(B)| over the functions f with norm of [D,f] less than one.
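Side by side, the two formulas for the same distance discussed above read:
\[ d(A,B) \;=\; \inf_{\gamma: A \to B} \int_{\gamma} \sqrt{g_{\mu\nu}\, dx^{\mu} dx^{\nu}} \;=\; \sup\ \big\{\, |f(A)-f(B)| \ :\ \| [D,f] \| \le 1 \,\big\} . \]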
Note that D has the dimension of the inverse of a length, i.e. of a mass. In fact in the above formula for distances in terms of a supremum, the product of "f" by D is dimensionless and "f" has the dimension of a length, since f(A) - f(B) is a distance.
Now what is the intuitive meaning of D? Note that the above formula measuring the distance d(A,B) as a supremum is based on the lack of commutativity between D and the coordinates "f" on our space. Thus there should be a tension that prevents D from commuting with the coordinates. This tension is provided by the following key hypothesis: "the inverse of D is an infinitesimal".
Indeed we saw in a previous post that variables with continuous range cannot commute with infinitesimals, which gives the needed tension. But there is more, because of the fundamental equation ds = 1/D, which gives to the inverse of D the heuristic meaning of the line element. This change of paradigm, from the g_{\mu\nu} to this operator theoretic ds, is the exact parallel of the change, in the metric system, from a concrete unit of length to a spectral one.
Thus one can think of a geometry as a concrete Hilbert space representation not only of the algebra of coordinates on the space X we are interested in, but also of its infinitesimal line element ds. In the usual Riemannian case this representation is moreover irreducible. Thus in many ways this is analogous to thinking of a particle as Wigner taught us, i.e. as an irreducible representation (of the Poincaré group).

Tuesday, July 10, 2007

A brief history of the metric system

The next step is to understand what is the replacement of the Riemannian paradigm for noncommutative spaces. To prepare for that, and using the excuse of the summer holidays, let me first tell the story of the change of paradigm that already took place in the metric system with the replacement of the concrete "mètre étalon" by a spectral unit of measurement.

The notion of geometry is intimately tied up with the measurement of length. In the real world such a measurement depends on the chosen system of units, and the story of the most commonly used system, the metric system, illustrates the difficulties attached to reaching some agreement on a physical unit of length which would unify the numerous previously existing choices. As is well known, the United States is one of the few countries not using the metric system, and this lack of uniformity in the choice of a unit of length became painfully obvious when it entailed the loss of a probe worth 125 million dollars, just because two different teams of engineers had used two different systems of units (imperial and metric).

In 1791 the French Academy of Sciences agreed on the definition of the unit of length in the metric system, the "mètre", as being the ten millionth part of the quarter of the meridian of the earth. The idea was to measure the length of the arc of the meridian from Barcelona to Dunkerque, while the corresponding angle (approximately 9.5°) was determined using measurements of latitude from reference stars. In a way this was just a refinement of what Eratosthenes had done in Egypt around 250 BC to measure the size of the earth (with a precision of 0.4%).

Thus in 1792 two expeditions were sent to measure this arc of the meridian, one for the northern portion, led by Delambre, and the other for the southern portion, led by Méchain. Both of them were astronomers who were using a new instrument for measuring angles, invented by Borda, a French physicist. The method they used was triangulation, together with a concrete measurement of the "base" of one triangle. It took them a long time to perform their measurements and it was a risky enterprise. At the beginning of the Revolution, France entered a war with Spain. Just try to imagine how difficult it is to explain that you are trying to define a universal unit of length when you are arrested at the top of a mountain with very precise optical instruments allowing you to follow all the movements of the troops in the surrounding area.
Both Delambre and Méchain were trying to reach the utmost precision in their measurements, and an important part of the delay came from the fact that this reached an obsessive level in the case of Méchain. In fact when he measured the latitude of Barcelona he did it from two different nearby locations, but found contradictory results which were discordant by 3.5 seconds of arc. Pressed to give his result, he chose to hide this discrepancy just to "save face", which is the wrong attitude for a scientist. Chased from Spain by the war with France, he had no second chance to understand the origin of the discrepancy and had to fiddle a little bit with his results to present them to the International Commission which met in Paris in 1799 to collect the results of Delambre and Méchain and compute the "mètre" from them. Since he was an honest man obsessed by precision, the above discrepancy kept haunting him and he obtained from the Academy the permission to lead another expedition a few years later to triangulate further into Spain. He went, and died from malaria in Valencia. After his death, his notebooks were analysed by Delambre, who found the discrepancy in the measurements of the latitude of Barcelona but could not explain it.
The explanation was found 25 years after the death of Méchain by a young astronomer by the name of Nicollet, who was a student of Laplace. Méchain had made, in both of the sites he had chosen in Barcelona (Mont Jouy and Fontana del Oro), a number of measurements of latitude using several reference stars, and had then simply taken the average of his measurements in each place. Méchain knew very well that refraction distorts the path of light rays, which creates an uncertainty when you use reference stars that are close to the horizon, but he considered that taking the average would wipe out this problem. What Nicollet did was to weight the average so as to eliminate the uncertainty created by refraction and, using the measurements of Méchain, he obtained a remarkable agreement (0.4 seconds of arc, i.e. a few meters) between the latitudes measured from Mont Jouy and Fontana del Oro. In other words Méchain had made no mistake in his measurements and could have understood by pure thought what was wrong in his computation. I recommend the book of Ken Alder for a nice account of the full story of the two expeditions.
In any case, in the meantime the International Commission had taken the results of the two expeditions and computed from them the length of the ten millionth part of the quarter of the meridian. Moreover a concrete platinum bar of approximately that length was then produced and taken as the definition of the unit of length in the metric system. With this unit, the actual length of the quarter of the meridian turns out to be 10 002 290 metres rather than the intended 10 000 000, but this is no longer relevant.
In fact in 1889 the reference became another specific metal bar (of platinum and iridium), called the "mètre étalon", which was deposited near Paris in the Pavillon de Breteuil. This definition held until 1960.

Already in 1927, at the seventh conference on the metric system, in order to take into account the inevitable natural variations of the concrete "mètre étalon", the idea emerged to compare it with a reference wavelength (the red line of cadmium).
Around 1960 the reference to the "mètre étalon" was finally abandoned and a new definition of the unit of length in the metric system (the "mètre") was adopted: 1650763.73 times the wavelength of the radiation corresponding to the transition between the levels 2p10 and 5d5 of krypton-86.
In 1967 the second was defined as the duration of 9192631770 periods of the radiation corresponding to the transition between the two hyperfine levels of caesium-133. Finally in 1983 the "mètre" was defined as the distance traveled by light in 1/299792458 of a second. In fact the speed of light is just a conversion factor, and to define the "mètre" one gives it the specific value c = 299792458 m/s. In other words the "mètre" is defined as a certain fraction, 9192631770/299792458 ≈ 30.6633..., of the wavelength of the radiation coming from the transition between the above hyperfine levels of the caesium atom.
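As a quick check of the arithmetic (using only the two defining constants quoted above): the caesium transition has frequency \nu = 9 192 631 770 Hz, so its wavelength is \lambda = c/\nu and
\[ \lambda = \frac{c}{\nu} = \frac{299\,792\,458}{9\,192\,631\,770}\ \text{m} \approx 0.0326\ \text{m}, \qquad \frac{1\ \text{m}}{\lambda} = \frac{\nu}{c} = \frac{9\,192\,631\,770}{299\,792\,458} \approx 30.6633 . \]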

The advantages of the new standard of length are many. First, by not being tied to any specific location, it is in fact available anywhere caesium is. The choice of caesium, as opposed to helium or hydrogen which are much more common in the universe, is of course still debatable, and it is quite possible that a new standard will soon be adopted involving spectral lines of hydrogen instead of caesium. See this paper of Bordé for an update.

While it would be difficult to communicate our standard of length to extraterrestrial civilizations if they had to make measurements of the earth (such as its size), the spectral definition can easily be encoded in a probe and sent out. In fact spectral patterns provide a perfect "signature" of chemicals, and universal information available anywhere these chemicals can be found, so that the wavelength of a specific line is a perfectly acceptable unit; while, if you start thinking a bit, you will find out that we would be unable to just tell where the earth is in the universe... Coordinates? Yes, but with respect to which system? One possibility would be to give the sequence of redshifts to nearby galaxies, and in a more refined manner to nearby stars, but it would be quite difficult to be sure that this would single out a definite place.

Tuesday, July 3, 2007

Noncommutative spacetime


As I explained in a previous post, it is only because one drops commutativity that, in the calculus, variables with continuous range can coexist with variables with countable range. In the classical formulation of variables, as maps from a set X to the real numbers, we saw above that discrete variables cannot coexist with continuous variables.
The uniqueness of the separable infinite dimensional Hilbert space cures that problem, and variables with continuous range coexist happily with variables with countable range, such as the infinitesimal ones. The only new fact is that they do not commute.
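To recall the dictionary of the previous post in one line (a rough summary, with an example chosen here only for illustration): a variable with continuous range is a self-adjoint operator on H whose spectrum is a continuum, an infinitesimal is a compact operator, and both act on the same H. For instance on H = L^2([0,1]) one can take
\[ (X\xi)(t) = t\,\xi(t) \quad\text{(continuous range } [0,1]\text{)}, \qquad T \ \text{compact},\ \ \mu_n(T)\to 0 \quad\text{(infinitesimal)}, \]
and the only compact operator commuting with X is zero: this is the precise sense in which continuous and infinitesimal variables are forced not to commute.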

One way to understand the transition from the commutative to the noncommutative is that in the latter case one needs to care about the ordering of the letters when one is writing.
As an example, use the "commutative rule" to simplify the following cryptic message I received from a friend: "Je suis alençonnais, et non alsacien. Si t'as besoin d'un conseil nana, je t'attends au coin annales. Qui suis-je?" (Roughly: "I am from Alençon, not Alsatian. If you need girl advice, I'll be waiting for you at the 'annales' corner. Who am I?"; the riddle, of course, only works with the French letters.)
It is Heisenberg who discovered that such care was needed when dealing with the coordinates on the phase space of microscopic systems.
At the philosophical level there is something quite satisfactory in the variability of the quantum mechanical observables. Usually, when pressed to explain what is the cause of the variability in the external world, the answer that comes naturally to mind is just: the passing of time. But precisely the quantum world provides a more subtle answer, since the reduction of the wave packet which happens in any quantum measurement is nothing else but the replacement of a "q-number" by an actual number, chosen among the elements of its spectrum. Thus there is an intrinsic variability in the quantum world which is so far not reducible to anything classical. The results of observations are intrinsically variable quantities, to the point that their values cannot be reproduced from one experiment to the next, but they form, when taken altogether, a q-number.

Heisenberg's discovery shows that the phase-space of microscopic systems is noncommutative inasmuch as the coordinates on that space no longer satisfy the commutative rule of ordinary algebra. This example of the phase space can be regarded as the historic origin of noncommutative geometry. But what about spacetime itself ? We now show why it is a natural step to pass from a commutative spacetime to a noncommutative one.
The full action of gravity coupled with matter admits a huge natural group of symmetries. The group of invariance of the Einstein-Hilbert action is the group of diffeomorphisms of the manifold, and the invariance of the action is simply the manifestation of its geometric nature. A diffeomorphism acts by permuting the points, so that points have no absolute meaning.
The full group of invariance of the action of gravity coupled with matter is however richer than the group of diffeomorphisms of the manifold, since one needs to include something called "the group of gauge transformations", which physicists have identified as the symmetry of the matter part.
This is defined as the group of maps from the manifold to some fixed other group G, called the "gauge group", which as far as we know is G = U(1).SU(2).SU(3). The group of diffeomorphisms acts on the group of gauge transformations by permutations of the points of the manifold, and the full group of symmetries of the action is the semi-direct product of the two groups (in the same way, the Poincaré group, which is the invariance group of special relativity, is the semi-direct product of the group of translations by the group of Lorentz transformations). In particular it is not a simple group (a simple group is one which cannot be decomposed into smaller pieces, a bit like a prime number cannot be factorized into a product of smaller numbers) but is a "composite" and contains a huge normal subgroup.
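In symbols, with Map(M,G) denoting the group of gauge transformations of the manifold M, the full symmetry group just described is
\[ \mathcal{G} \;=\; \mathrm{Map}(M,G) \rtimes \mathrm{Diff}(M), \qquad 1 \longrightarrow \mathrm{Map}(M,G) \longrightarrow \mathcal{G} \longrightarrow \mathrm{Diff}(M) \longrightarrow 1 , \]
the normal subgroup Map(M,G) being the "huge normal subgroup" just mentioned.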
Now that we know the invariance group of the action, it is natural to try and find a space X whose group of diffeomorphisms is exactly that group, so that we could hope to interpret the full action as pure gravity on X. This is the old Kaluza-Klein idea. Unfortunately this search is bound to fail if one looks for an ordinary manifold, since by a mathematical result the connected component of the identity in the group of diffeomorphisms is always a simple group, which excludes a semi-direct product structure such as that of the above invariance group of the full action of gravity coupled with matter.
But noncommutative spaces of the simplest kind readily give the answer, modulo a few subtle points. To understand what happens, note that for ordinary manifolds the algebraic object corresponding to a diffeomorphism is just an automorphism of the algebra of coordinates, i.e. a transformation of the coordinates that does not destroy their algebraic relations. When an involutive algebra A is not commutative there is an easy way to construct automorphisms.
One takes a unitary element u of the algebra, i.e. an element such that u u* = u*u = 1. Using u one obtains an automorphism, called inner, by the formula x -> u x u*.
Note that in the commutative case this formula just gives the identity automorphism (since one could then permute x and u*). Thus this construction is interesting only in the noncommutative case. Moreover the inner automorphisms form a subgroup, denoted Int(A), which is always a normal subgroup of the group of automorphisms of A.
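A minimal numerical sketch of this dichotomy (the 2 x 2 example and the names below are mine, chosen only for illustration): conjugation by a unitary matrix moves a non-central element of the matrix algebra, while on the commutative algebra of diagonal matrices the same formula x -> u x u* does nothing.

import numpy as np

# A unitary element u of A = M_2(C): here a real rotation matrix.
theta = 0.3
u = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

x = np.array([[1.0, 0.0],
              [0.0, 2.0]])                # an element of A which is not central

print(np.allclose(u @ x @ u.conj().T, x))  # False: the inner automorphism x -> u x u* moves x

# Commutative case: the subalgebra of diagonal matrices.
d = np.diag(np.exp(1j * np.array([0.5, -0.2])))   # a diagonal unitary
y = np.diag([3.0, 4.0]).astype(complex)           # a diagonal element
print(np.allclose(d @ y @ d.conj().T, y))  # True: inner automorphisms of a commutative algebra are trivial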
In the simplest example, where we take for A the algebra of smooth maps from a manifold M to the algebra of matrices of complex numbers, one shows that the group Int(A) is in that case (locally) isomorphic to the group of gauge transformations, i.e. of smooth maps from M to the gauge group G = PSU(n) (the quotient of SU(n) by its center). Moreover the relation between inner automorphisms and all automorphisms becomes identical to the exact sequence governing the structure of the above invariance group of the full action of gravity coupled with matter.
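The parallel can be displayed as two exact sequences, with Out(A) = Aut(A)/Int(A) (a summary of the statement above, for A the algebra of this example):
\[ 1 \to \mathrm{Int}(A) \to \mathrm{Aut}(A) \to \mathrm{Out}(A) \to 1, \qquad 1 \to \mathrm{Map}(M,G) \to \mathcal{G} \to \mathrm{Diff}(M) \to 1 , \]
with Int(A) playing the role of the group of gauge transformations and Out(A) (locally) that of the group of diffeomorphisms.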

It is quite striking that the terminology coming from physics (internal symmetries) agrees so well with the mathematical one (inner automorphisms). In the general case only those automorphisms that are unitarily implemented in Hilbert space will be relevant, but modulo this subtlety one can see at once from the above example the advantage of treating noncommutative spaces on the same footing as the ordinary ones. The next step is to properly define the notion of metric for such spaces, and we shall indulge, in the next post, in a short historical description of the evolution of the definition of the "unit of length" in physics. This will prepare the ground for the introduction to the spectral paradigm of noncommutative geometry.