Sunday, April 23, 2017

Politics and a bit on the symmetry properties of the commutator and the Jordan products


This week I am torn between physics and politics. On one hand I have the scheduled physics topics to talk about, and on the other hand there are some very juicy political topics. So let me start with some political commentary, which I will attempt to keep to a minimum.

If you do not live in the US, it is hard to understand the amount of political pressure on science which comes from the right. The GOP is at war with science because of three factors: the political elites are corrupt and depend on lobby money from corporations who make more money when they wreck the environment, the religious right is at war with evolution, and lastly the inbred rednecks swallow hook, line, and sinker the toxic sludge of Fox News propaganda in the name of "freedom". 

So the recent March for Science was a breath of fresh air: people sick and tired of the GOP war on science took a stand for the facts that 2+2 is still 4, humans cause global warming, and Earth is older than 5000 years. And then I opened my email and saw an alert about a new post by Lubos Motl defending Bill O'Reilly. I normally delete those notifications, and I don't really know why I am subscribed to them because I only get about one a week - it is strangely inconsistent. So I said to myself: how fitting. The (naked) emperor of physics who once wanted to reclassify an arXiv paper to the general physics section after it was published in PRL, the climate change denier and open apologist of the murderer Putin, con-man Trump, and white trash Sarah Palin, throws his support behind another toxic propagandist like himself. Would it have been too much to expect him to defend science instead? Out of curiosity I followed the link to see the pro-O'Reilly rant, and I was not disappointed: it was chock-full of imbecilic nonsense, as I expected. But then I saw the icing on the cake: in the history list Lubos had also written a rant against the March for Science, calling it misguided and unethical. Wow! Now in France (like in the US or UK) there is no shortage of stupidity, which just propelled Marine Le Pen into the presidential runoff. The global village idiots will flock to her side and I have no doubt Lubos will support her too.

OK, the political topics took up too much space, and I want to continue with the series topic on quantum mechanics reconstruction. Let me just say what the products \(\alpha\) and \(\sigma\) will turn out to be. In the classical mechanics case it can be constructively proven that \(\alpha\) is the Poisson bracket, while in the quantum case \(\alpha\) is the commutator. The other product \(\sigma\) is ordinary function multiplication in classical mechanics and the Jordan product (half the anti-commutator) in quantum mechanics. 

Now the Poisson bracket and the commutator are anti-symmetric: \(f\alpha g = - g\alpha f\), while ordinary function multiplication and the Jordan product are symmetric: \(f\sigma g = g\sigma f\). These symmetry properties are preserved under system composition, as we can see from the fundamental relationships:

\(\Delta (\alpha) = \alpha \otimes \sigma + \sigma \otimes \alpha \)
\(\Delta (\sigma) = \sigma \otimes \sigma - \alpha \otimes \alpha\)

because the symmetry of each term is the product of the symmetries of its factors, with the sign rules S·S = S, S·A = A, A·A = S (S = symmetric, A = anti-symmetric).
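These claims are easy to check numerically in the quantum case. Below is a minimal sketch using NumPy, under the assumption (a common convention, with \(\hbar = 1\)) that \(\alpha\) is the commutator scaled by \(i/2\) and \(\sigma\) is half the anti-commutator; the scaling keeps the result of each product Hermitian when its inputs are.

```python
import numpy as np

rng = np.random.default_rng(0)

def rand_hermitian(n):
    """Random Hermitian matrix playing the role of an observable."""
    m = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (m + m.conj().T) / 2

def alpha(f, g):
    """Commutator scaled by i/2 so Hermitian inputs give a Hermitian output."""
    return 0.5j * (f @ g - g @ f)

def sigma(f, g):
    """Jordan product: half the anti-commutator."""
    return 0.5 * (f @ g + g @ f)

f, g = rand_hermitian(2), rand_hermitian(2)

# anti-symmetry of alpha, symmetry of sigma
assert np.allclose(alpha(f, g), -alpha(g, f))
assert np.allclose(sigma(f, g), sigma(g, f))
# Hermiticity is preserved by both products
assert np.allclose(alpha(f, g), alpha(f, g).conj().T)
assert np.allclose(sigma(f, g), sigma(f, g).conj().T)
```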

Incidentally, this observation opens up another way into quantum mechanics reconstruction (from the operational point of view), but I will not talk about it in this series. Instead, next time I will show how to prove that the product \(\alpha\) is anti-symmetric. Again the Leibniz identity will come to the rescue. Then, using the fundamental relationship, it follows that the product \(\sigma\) is symmetric. Eventually all their mathematical properties will be obtained. Please stay tuned. 

Sunday, April 16, 2017

The fundamental bipartite relations


Continuing from where we left off last time, we introduced the most general composite products for a bipartite system:

\(\alpha_{12} = a_{11}\alpha \otimes \alpha + a_{12} \alpha\otimes\sigma + a_{21} \sigma\otimes \alpha + a_{22} \sigma\otimes\sigma\)
\(\sigma_{12} = b_{11}\alpha \otimes \alpha + b_{12} \alpha\otimes\sigma + b_{21} \sigma\otimes \alpha + b_{22} \sigma\otimes\sigma\)

The question now becomes: are the \(a\) and \(b\) parameters free, or can we say something about them? To start, let's normalize the product \(\sigma\) like this:

\(f\sigma I = I\sigma f = f\)

which can always be done. Now in:

\((f_1 \otimes f_2)\alpha_{12}(g_1\otimes g_2) = \)
\(=a_{11}(f_1 \alpha g_1)\otimes  (f_2 \alpha g_2) + a_{12}(f_1 \alpha g_1) \otimes (f_2 \sigma g_2 ) +\)
\(+a_{21}(f_1 \sigma g_1)\otimes  (f_2 \alpha g_2) + a_{22}(f_1 \sigma g_1) \otimes (f_2 \sigma g_2 )\)

if we pick \(f_1 = g_1 = I\) :

\((I \otimes f_2)\alpha_{12}(I\otimes g_2) = \)
\(=a_{11}(I \alpha I)\otimes  (f_2 \alpha g_2) + a_{12}(I \alpha I) \otimes (f_2 \sigma g_2 ) +\)
\(+a_{21}(I \sigma I)\otimes  (f_2 \alpha g_2) + a_{22}(I \sigma I) \otimes (f_2 \sigma g_2 )\)

and recalling from last time that \(I\alpha I = 0\) from the Leibniz identity, we get:

\(f_2 \alpha g_2 = a_{21} (f_2 \alpha g_2 ) + a_{22} (f_2 \sigma g_2)\)

which demands \(a_{21} = 1\) and \(a_{22} = 0\).

If we make the same substitution into:

 \((f_1 \otimes f_2)\sigma_{12}(g_1\otimes g_2) = \)
\(=b_{11}(f_1 \alpha g_1)\otimes  (f_2 \alpha g_2) + b_{12}(f_1 \alpha g_1) \otimes (f_2 \sigma g_2 ) +\)
\(+b_{21}(f_1 \sigma g_1)\otimes  (f_2 \alpha g_2) + b_{22}(f_1 \sigma g_1) \otimes (f_2 \sigma g_2 )\)

we get:

\(f_2 \sigma g_2 = b_{21} (f_2 \alpha g_2 ) + b_{22} (f_2 \sigma g_2)\)

which demands \(b_{21} = 0\) and \(b_{22} = 1\).

We can play the same game with \(f_2 = g_2 = I\) and (skipping the trivial details) we get two additional conditions: \(a_{12} = 1\) and \(b_{12} = 0\).

In coproduct notation what we get so far is:

\(\Delta (\alpha) = \alpha \otimes \sigma + \sigma \otimes \alpha + a_{11} \alpha \otimes \alpha\)
\(\Delta (\sigma) = \sigma \otimes \sigma + b_{11} \alpha \otimes \alpha\)

By applying the Leibniz identity on a bipartite system, one can show after some tedious computations that \(a_{11} = 0\). The only remaining free parameter is \(b_{11}\), which can be normalized to either -1, 0, or +1 (the elliptic, parabolic, and hyperbolic cases respectively). Each choice corresponds to a potential theory of nature. For example, 0 corresponds to classical mechanics, and -1 to quantum mechanics.

Elliptic composability is quantum mechanics! The bipartite products obey:


\(\Delta (\alpha) = \alpha \otimes \sigma + \sigma \otimes \alpha \)
\(\Delta (\sigma) = \sigma \otimes \sigma - \alpha \otimes \alpha\)

Please notice the similarity with complex number multiplication: \((a_1 + i b_1)(a_2 + i b_2) = (a_1 a_2 - b_1 b_2) + i (a_1 b_2 + b_1 a_2)\). This is why complex numbers play a central role in quantum mechanics.
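The elliptic bipartite relations can be verified directly on matrices. Here is a small sketch, again assuming the convention \(\alpha = (i/2)[\cdot,\cdot]\) and \(\sigma = \frac{1}{2}\{\cdot,\cdot\}\) with \(\hbar = 1\), and using the Kronecker product for \(\otimes\):

```python
import numpy as np

rng = np.random.default_rng(1)

def rand_hermitian(n):
    m = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (m + m.conj().T) / 2

def alpha(f, g):  # commutator scaled by i/2 (convention, hbar = 1)
    return 0.5j * (f @ g - g @ f)

def sigma(f, g):  # Jordan product: anti-commutator / 2
    return 0.5 * (f @ g + g @ f)

f1, g1, f2, g2 = (rand_hermitian(2) for _ in range(4))
F, G = np.kron(f1, f2), np.kron(g1, g2)

# Delta(alpha) = alpha (x) sigma + sigma (x) alpha
assert np.allclose(alpha(F, G),
                   np.kron(alpha(f1, g1), sigma(f2, g2)) +
                   np.kron(sigma(f1, g1), alpha(f2, g2)))

# Delta(sigma) = sigma (x) sigma - alpha (x) alpha  (elliptic case, b11 = -1)
assert np.allclose(sigma(F, G),
                   np.kron(sigma(f1, g1), sigma(f2, g2)) -
                   np.kron(alpha(f1, g1), alpha(f2, g2)))
```

Note that the minus sign in the second relation comes from the factor of \(i\) in \(\alpha\): it is the same minus sign as in \(i \cdot i = -1\), which is the similarity with complex multiplication noted above.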

At the moment the two products are not known to satisfy any other properties. But we can continue this line of argument and prove their symmetry/anti-symmetry, and from there derive their complete properties, arriving constructively at the standard formulation of quantum mechanics. Please stay tuned.

Sunday, April 9, 2017

Time evolution for a composite system


Continuing where we left off last time, let me first point out one thing which I glossed over too quickly: the representation of \(D\) as a product \(\alpha\): \(Dg = f\alpha g\). This is highly nontrivial, and not all time evolutions respect it. In fact, the statement above is nothing but a reformulation of Noether's theorem in the Hamiltonian formalism. I have not built up the proper mathematical machinery to easily show this, so take my word on it for now. I might revisit this at a later time.

Now what I want to do is explore what happens to the product \(\alpha\) when we consider two physical systems 1 and 2. First, let's introduce the unit element of our category, and let's call it "I":

\(f\otimes I = I\otimes f = f\)

for all \(f \in C\)

Then we have \((f_1\otimes I) \alpha_{12} (g_1\otimes I) = f_1 \alpha g_1\)

On the other hand suppose in nature there exists only the product \(\alpha\). Then the only way we can construct a composite product \(\alpha_{12}\) out of \(\alpha_1\) and \(\alpha_2\) is:

\((f_1\otimes f_2) \alpha_{12} (g_1 \otimes g_2) = a(f_1 \alpha_1 g_1)\otimes (f_2\alpha_2 g_2)\)

where \(a\) is a constant. 

Now if we pick \(f_2 = g_2 = I\) we get:

\((f_1\otimes I) \alpha_{12} (g_1 \otimes I) = a(f_1 \alpha_1 g_1)\otimes (I \alpha_2 I)  \)
which is the same as \(f_1 \alpha g_1\) by the above. 

But what is \(I\alpha I\)? Here we use the Leibniz identity to prove it is equal to zero:

\(I \alpha (I\alpha A) = (I \alpha I) \alpha A + I \alpha (I \alpha A)\)

for all \(A\); subtracting the common term gives \((I\alpha I)\alpha A = 0\) for every \(A\), and hence \(I\alpha I = 0\)

But this means that a single product \(\alpha\) by itself is not enough: the right-hand side above would vanish while the left-hand side does not! Therefore we need a second product \(\sigma\)! Alpha will turn out to be the commutator, and sigma the Jordan product of observables, but we will derive this in a constructive fashion.

Now that we have two products in our theory of nature, let's see how can we build the composite products out of individual systems. Basically we try all possible combinations:

\(\alpha_{12} = a_{11}\alpha \otimes \alpha + a_{12} \alpha\otimes\sigma + a_{21} \sigma\otimes \alpha + a_{22} \sigma\otimes\sigma\)
\(\sigma_{12} = b_{11}\alpha \otimes \alpha + b_{12} \alpha\otimes\sigma + b_{21} \sigma\otimes \alpha + b_{22} \sigma\otimes\sigma\)

which is shorthand for (I am spelling out only the first case):

\((f_1 \otimes f_2)\alpha_{12}(g_1\otimes g_2) = \)
\(=a_{11}(f_1 \alpha g_1)\otimes  (f_2 \alpha g_2) + a_{12}(f_1 \alpha g_1) \otimes (f_2 \sigma g_2 ) +\)
\(+a_{21}(f_1 \sigma g_1)\otimes  (f_2 \alpha g_2) + a_{22}(f_1 \sigma g_1) \otimes (f_2 \sigma g_2 )\)

For the mathematically inclined reader, we have constructed what is called a coalgebra, where the operation is called a coproduct: \(\Delta : C \rightarrow C\otimes C\). In category theory a coproduct is obtained from a product by reversing the arrows.

Now the task is to see if we can say something about the coproduct parameters \(a_{11},\ldots, b_{22}\). In general nothing constrains their values, but in our case we do have an additional relation: the Leibniz identity, which arises out of the functoriality of time evolution. This will be enough to fully determine the products \(\alpha\) and \(\sigma\), and from them the formalism of quantum mechanics. Please stay tuned.

Sunday, March 26, 2017

Time as a continuous functor


To recall from prior posts, a functor maps objects to objects and arrows to arrows between two categories. In other words, it is structure preserving. In the case of a monoidal category, suppose there is a product arrow \(*: C\times C \rightarrow C\). Then a functor T makes the diagram below commute:


This is all fancy abstract math which has a simple physical interpretation when T corresponds to time evolution: the laws of physics do not change in time. Moreover it can be shown with a bit of effort and knowledge of C* algebras that Time as a functor = unitarity.

But what can we derive from the commutative diagram above? With the additional help of two more very simple and natural ingredients we will be able to reconstruct the complete formalism of quantum mechanics!!! Today I will introduce the first one: time is a continuous parameter. Just like in group theory, where adding continuity results in the theory of Lie groups, we will consider continuous functors and investigate what happens in the neighborhood of the identity element.

In the limit of time evolution going to zero, \(T\) becomes the identity. For an infinitesimal time evolution we can then write:

\(T = I + \epsilon D\)

We plug this back into the diagram commutativity condition \(T(A)*T(B) = T(A*B)\), and to first order we obtain the chain rule of differentiation:

\(D(A*B) = D(A)*B + A*D(B)\)

There is not a single kind of time evolution, and \(D\) is not unique (think of the various Hamiltonians). There is a natural transformation between different time evolution functors, and we can express \(D\) in terms of a product \((\cdot\, \alpha\, \cdot)\) like this: \(D_A g = A\alpha g\), where

\(\alpha : C\times C \rightarrow C\)

Then we obtain the Leibniz identity:

\(A\alpha (B * C) = (A\alpha B) * C + B * (A \alpha C)\)

This is extremely powerful, as it is unitarity in disguise.  Next time we'll use the tensor product and the second ingredient to obtain many more mathematical consequences. Please stay tuned.
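Both the first-order expansion and the resulting Leibniz identity can be checked concretely. Below is a sketch assuming the quantum realization, where \(*\) is matrix multiplication and the generator is the Heisenberg one, \(D(A) = i[H, A]\) with \(\hbar = 1\) and \(H\) an arbitrary Hermitian matrix standing in for a Hamiltonian:

```python
import numpy as np

rng = np.random.default_rng(2)

def rand_matrix(n):
    return rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

H = rand_matrix(3)
H = (H + H.conj().T) / 2  # Hermitian "Hamiltonian" (arbitrary choice)

def D(A):
    """Heisenberg-picture generator of time evolution: D(A) = i[H, A], hbar = 1."""
    return 1j * (H @ A - A @ H)

A, B = rand_matrix(3), rand_matrix(3)

# the chain rule / Leibniz identity, with matrix multiplication as the product *
assert np.allclose(D(A @ B), D(A) @ B + A @ D(B))

# first-order check: T = I + eps*D preserves the product up to O(eps^2)
eps = 1e-7
T = lambda X: X + eps * D(X)
assert np.allclose(T(A) @ T(B), T(A @ B))
```

The residual in the last check is of order \(\epsilon^2 D(A)D(B)\), which is far below the comparison tolerance, exactly as the first-order expansion predicts.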

Sunday, March 19, 2017

Monoidal categories and the tensor product



Last time we discussed the category theory product, which forms a new category from two categories. Suppose now that we start with one category \(C\) and form the product with itself, \(C\times C\). It is natural to ask if there is a functor from \(C\times C\) to \(C\). If such a functor exists, and moreover it respects associativity and unit elements, then the category \(C\) is called a monoidal category. By abuse of notation the functor above is called the tensor product, but this is not the usual tensor product of vector spaces. The tensor product of vector spaces is only one concrete example of a monoidal product. To get to the ordinary tensor product we need to inject physics into the problem. 

The category \(C\) we are interested in is that of physical systems, where the objects are physical systems and the arrows are compositions of physical systems. The key physical concepts needed are those of time and of a dynamical degree of freedom in the Hamiltonian formalism.

Time plays a distinguished role in quantum mechanics, both in terms of the formalism (remember that there is no time operator) and in how quantum mechanics can be reconstructed. 

The space of the Hamiltonian formalism is a Poisson manifold, which is not necessarily a vector space. But because the Hilbert space \(L^2 (R^3\times R^3)\) is isomorphic to \(L^2 (R^3 ) \otimes L^2 (R^3 )\), let's discuss monoidal categories for vector spaces obeying an equivalence relationship. Hilbert spaces form a category of their own, and there is a functor mapping physical systems into Hilbert spaces. This is usually presented as the first quantum mechanics postulate: each physical system is associated with a complex Hilbert space \(H\).

For full generality of the definition of the tensor product, we consider two distinct vector spaces \(V\) and \(W\), for which we first form the category theory product (in this case the Cartesian product) and then make the following identifications:
  • \((v_1, w)+(v_2, w) = (v_1 + v_2, w)\)
  • \((v, w_1)+(v, w_2) = (v, w_1 + w_2)\)
  • \(c(v,w) = (cv, w) = (v, cw)\)
For physical justification think of V and W as one dimensional vector spaces corresponding to distinct dynamical degrees of freedom. Linearity is a property of vector spaces and we expect this property to be preserved if vector spaces are to describe nature. Bilinearity in the equivalence relationship above arises because the degrees of freedom are independent.

Now a Cartesian product of vector spaces respecting the above relationships is a new mathematical object: a tensor product.
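The three identifications above are exactly the bilinearity properties that NumPy's Kronecker product satisfies, so they can be checked directly (a small illustrative sketch with arbitrary example vectors):

```python
import numpy as np

# two "dynamical degrees of freedom" as vectors in V and W
v1 = np.array([1.0, 2.0])
v2 = np.array([0.5, -1.0])
w = np.array([3.0, 4.0, 5.0])
c = 2.5

# (v1, w) + (v2, w) ~ (v1 + v2, w)
assert np.allclose(np.kron(v1, w) + np.kron(v2, w), np.kron(v1 + v2, w))

# c(v, w) ~ (cv, w) ~ (v, cw)
assert np.allclose(c * np.kron(v1, w), np.kron(c * v1, w))
assert np.allclose(np.kron(c * v1, w), np.kron(v1, c * w))
```

Note that the plain Cartesian pair \((v, w)\) has \(\dim V + \dim W\) components, while the tensor product has \(\dim V \cdot \dim W\); the identifications are what collapse the former into the latter.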

The tensor product is unique up to isomorphism and respects the following universal property:

There is a bilinear map \(\phi : V\times W \rightarrow V\otimes W\) such that given any other vector space Z and a bilinear map \(h: V\times W \rightarrow Z\) there is a unique linear map \(h^{'}: V\otimes W \rightarrow Z\) such that the diagram below commutes.


This universal property is very strong, and several mathematical facts follow from it: the tensor product is unique up to isomorphism (instead of \(Z\), consider another tensor product \(V\otimes^{'}W\)), the tensor product is associative, and there is a natural isomorphism between \(V\otimes W\) and \(W\otimes V\), making the tensor product an example of a symmetric monoidal category, just like the category of physical systems under composition.

This may look like a trivial observation, but it is extremely powerful, and it is the starting point of quantum mechanics reconstruction. On one hand we have composition of physical systems and theories of nature describing physical systems. On the other hand we have dynamical degrees of freedom and the rules of quantum mechanics. The two are actually identical, and each one can be derived from the other. To do this we need one additional ingredient: time viewed as a functor. Please stay tuned.

Monday, March 13, 2017

Category Theory Product


Before we discuss this week's topic, I want to make two remarks about the content of prior posts. First, why do we need natural transformations in algebraic topology? Associating groups to topological spaces (groups which, incidentally, describe the hole structure of the space) is done by the use of functors. Different (co)homology theories are basically different functors, and proving their equivalence amounts to exhibiting a natural isomorphism between them. Second, the logic used in category theory is intuitionistic logic, where truth is proved constructively. Since this is mapped into computer science by the Curry-Howard isomorphism, the fact that some statements have no constructive proof is equivalent to a computation running forever. In computation theory one encounters the halting problem. If the halting problem were decidable, category theory would have been mapped to ordinary logic instead of intuitionistic logic.

Now back to the topic of the day. We are still in the domain of pure math, looking at mathematical objects from 10,000 feet, disregarding their nature and observing only their existence and their relationships (objects and arrows). The first question one asks is: how do we construct new categories from existing ones? One way is to simply reverse the direction of all arrows; the resulting category is unsurprisingly called the opposite category (or the dual). Another way is to combine two categories into a new one. Enter the concept of the product of two categories: \(\mathbf{C}\times \mathbf{D}\). In set theory this would correspond to the Cartesian product of two sets. However, we need to give a definition which is independent of the nature of the elements. Moreover, we want to give it in a way which guarantees uniqueness up to isomorphism. 

The basic idea is that of projections from the elements of \(\mathbf{C}\times \mathbf{D}\) back to the elements of \(\mathbf{C}\) and \(\mathbf{D}\). So how do we know that those projections and the product are unique up to isomorphism? Suppose that there is another category \(\mathbf{Y}\) with maps \(f_C\) and \(f_D\). Then there is a unique map \(f\) such that the diagram below commutes





This diagram has to commute for all categories \(\mathbf{Y}\) and their maps \(f_C\) and \(f_D\). From this definition, can you prove uniqueness of the product up to isomorphism? It is a simple matter of "diagram reasoning". Just pretend that \(\mathbf{Y}\) is now the "true incarnation" of the product. You need to find a morphism \(f\) from \(\mathbf{Y}\) to \(\mathbf{C}\times\mathbf{D}\) and a morphism \(g\) from \(\mathbf{C}\times\mathbf{D}\) to \(\mathbf{Y}\) such that \(g\circ f = 1_{\mathbf{Y}}\) and \(f\circ g = 1_{\mathbf{C}\times \mathbf{D}}\). See? Category theory is really easy, no harder than linear algebra.

Now what happens if we flip all arrows in the diagram above? We obtain a coproduct category \(\mathbf{C}\oplus \mathbf{D}\) and the projections maps become injection maps. 

OK, time for concrete examples:

  • sets: product = Cartesian product, coproduct = disjoint union
  • partially ordered sets: product = greatest lower bound (meet), coproduct = least upper bound (join)
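For the set example, the universal property is short enough to spell out in code. This is an illustrative sketch with made-up example sets: any "competitor" \(Y\) equipped with maps into \(C\) and \(D\) factors uniquely through the Cartesian product and its projections.

```python
from itertools import product  # Cartesian product of iterables

# the product of two sets, with its two projections
C = {1, 2}
D = {'a', 'b'}
CxD = set(product(C, D))
p_C = lambda pair: pair[0]
p_D = lambda pair: pair[1]

# a competitor Y with maps f_C: Y -> C and f_D: Y -> D
Y = {'x', 'y'}
f_C = {'x': 1, 'y': 2}
f_D = {'x': 'a', 'y': 'b'}

# the unique mediating map f: Y -> CxD making the diagram commute
f = lambda y: (f_C[y], f_D[y])
for y in Y:
    assert p_C(f(y)) == f_C[y] and p_D(f(y)) == f_D[y]

# coproduct in sets: disjoint union, realized by tagging each element
# with its origin (the injections are c -> (0, c) and d -> (1, d))
coprod = {(0, c) for c in C} | {(1, d) for d in D}
assert len(coprod) == len(C) + len(D)
```

Uniqueness of \(f\) is visible here too: commuting with both projections pins down both components of \(f(y)\), so no other choice is possible.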
So where are we now? The concept of the product is very simple, but we need it as a stepping stone to the concept of the tensor product and of a (symmetric) monoidal category. Why? Because physical systems form a symmetric monoidal category. Using categorical arguments we can derive the complete mathematical properties of any theory of nature describing such a symmetric monoidal category. And the answer will turn out to be: quantum mechanics. Please stay tuned.

Saturday, March 4, 2017

The Curry–Howard isomorphism


Category theory may seem very abstract and intimidating, but in fact it is quite easy to understand. In category theory we look at concrete objects from far away, without any regard for their internal structure. This is similar to Bohr's position on physics: physics is about what we can say about nature, not about deciding what nature is. Surprisingly, a lot of information about the objects in category theory is derivable from their behavior alone, and this is where I am ultimately heading with this series on category theory.

Last time I mentioned the origin of category theory as the formalism to clarify when two homology theories are equivalent. But category theory can be approached from two other directions as well, and those alternative viewpoints help provide the intuition needed to navigate its abstractions. One thread of discussion starts with the idea of computability and the work of Alonzo Church and Alan Turing. Turing was Church's student, and each started an essential line of research: lambda calculus and universal Turing machines. Lambda calculus later grew into functional languages like Lisp and Haskell, while the Turing machine model underlies imperative and object-oriented languages like C++. What one can do with lambda calculus can be achieved with universal Turing machines, and the other way around. The essential idea of computer programming is to build complex structures out of simpler building blocks. Object-oriented programming starts from the idea of packaging together actions and state. An object is a "black box" containing actions (functions performing computations) and information (the internal state of the object). Functional programming, on the other hand, lacks the concept of an internal state: you deal only with functions which take an input, crunch the numbers, and produce an output. An early embodiment of this formula-crunching idea is FORTRAN: FORmula TRANslation (from higher-level, human-understandable syntax into the zeroes and ones understandable by a machine).

The second direction from which one can start category theory is intuitionistic logic and the foundations of set theory. The problem with naive set theory is that one can create paradoxes like Russell's paradox: the set of all sets which are not members of themselves. The solution Russell proposed was type theory. Types introduce structure to set theory, preventing self-referential constructions. In computer programming, types are semantic rules which tell us how to interpret various sequences of zeroes and ones in computer memory as integers, boolean variables, etc.

In intuitionistic logic, statements are not shown true by simply disproving their falsehood; they are true by providing an explicit construction. Truth must be computed, and the parallel with computer programming is obvious. There is a name for this relationship: the Curry-Howard isomorphism. The mathematical formalism needed to rigorously spell out this correspondence is category theory. At a high level:
  • proofs are programs
  • propositions are types
More importantly, we can attach logical and programming meaning to category theory constructions, which helps dramatically reduce the difficulty of category theory to that of elementary linear algebra. 
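The "proofs are programs" slogan can be made concrete with a tiny sketch (illustrative only, using Python's type hints as stand-ins for propositions): a total function of type \(A \rightarrow B\) is read as a proof that \(A\) implies \(B\), and composing such functions proves the transitivity of implication.

```python
from typing import Callable, TypeVar

A = TypeVar('A')
B = TypeVar('B')
C = TypeVar('C')

def compose(f: Callable[[A], B], g: Callable[[B], C]) -> Callable[[A], C]:
    """From proofs of A -> B and B -> C, build a proof of A -> C."""
    return lambda a: g(f(a))

def curry(f: Callable[[A, B], C]) -> Callable[[A], Callable[[B], C]]:
    """A proof of (A and B) -> C becomes a proof of A -> (B -> C)."""
    return lambda a: lambda b: f(a, b)

inc = lambda n: n + 1    # a "proof" of int -> int
show = lambda n: str(n)  # a "proof" of int -> str

assert compose(inc, show)(41) == '42'
assert curry(lambda a, b: a + b)(1)(2) == 3
```

Under the correspondence, `compose` is the modus-ponens chain of logic and the composition arrow of category theory at the same time, which is exactly the triple dictionary (logic / programs / categories) the isomorphism provides.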

There are two additional key points I want to make. First, category theory ignores the internal structure of the objects: they can be sets, topological spaces, posets, even physical systems. As such, uniqueness is relaxed in category theory, and things are unique only up to isomorphism. Second, we strengthen uniqueness by seeking universal properties. This gives category theory its abstract flavour: the generalization of standard mathematical concepts in category theory involves diagrams which must commute. The typical definition is something like: "if there is an 'impostor' which claims to have the same properties as the concept being defined, then there exists a so-and-so isomorphism such that a certain diagram commutes, which guarantees that the impostor is nothing but a restatement of the same concept up to isomorphism". Next time I will talk about the first key definition we need from category theory, that of a product, and, by flipping the arrows, that of a coproduct.