What we want from a geometry over ${\mathbb{F}_1}$

From now on the main object (the irony of this terminology will become clear) of study is ${\mathbb{F}_1}$. I will first elaborate on what Kapranov and Smirnov call the “folklore imagery”, trying to understand the motivation behind a statement like

A vector space over ${\mathbb{F}_1}$ is just a set.

Afterwards I will draw some conclusions from these observations, motivating some of the approaches outlined in Mapping ${\mathbb{F}_1}$-land.

Demystifying the ${\mathbb{F}_1}$-lore

In Projective geometry over ${\mathbb{F}_1}$ and the Gaussian binomial coefficient a nice build-up is given. I will summarise and give my own viewpoint.

We already know that ${\mathbb{F}_1}$ doesn’t exist as a field (because the axioms require ${0\neq 1}$). But what if we loosen the axioms and take the trivial ring as ${\mathbb{F}_1}$? In that case modules over this ring would play the role of vector spaces over ${\mathbb{F}_1}$. But the only module over the trivial ring is the trivial one, hence there are no nontrivial vector spaces. This approach obviously doesn’t work. A conclusion I like to draw after vastly generalising this approach:

Setting ${\mathbb{F}_1}$ to be something doesn’t work.

The idea of looking at vector spaces seems promising though: given a field it is the most obvious structure built upon it, so we want to make sense of it over ${\mathbb{F}_1}$. We know that the cardinality of an ${n}$-dimensional vector space ${V}$ over a finite field ${\mathbb{F}_q}$ is ${q^n}$. Applying this to ${\mathbb{F}_1}$ we get a single point, for all ${n}$. So far so good, but we also know that a basis for ${V}$ consists of ${n}$ elements, and a single point cannot remember its dimension. The problem with this approach is that it’s too direct: we cannot construct objects over ${\mathbb{F}_1}$, we need to get there by an analogy that avoids contradictions like this. Applying this for instance to ${\mathbb{F}_1[t]}$ we see that it doesn’t yield any satisfying definition either. The conclusion after another round of generalisation:

Look at induced objects, not constructions.

The same applies to noncommutative geometry by the way. But let’s focus on ${\mathbb{F}_1}$ for the moment.

So we need a simple object over ${\mathbb{F}_1}$ where we can avoid an explicit construction, getting facts about that object only by analogy without running into contradictions. Let’s try this for ${\mathbb{P}^n/\mathbb{F}_1}$. The construction of this over an actual field ${k}$ consists of constructing ${\mathbb{A}^{n+1}/k}$ (an ${n+1}$-dimensional vector space over ${k}$) and setting ${\mathbb{P}^n/k}$ to be the set of lines through the origin.

If we take ${k=\mathbb{F}_q}$ the number of points in ${\mathbb{A}^{n+1}/\mathbb{F}_q}$ is ${q^{n+1}}$. The number of lines through the origin is ${(q^{n+1}-1)/(q-1)}$: any point in ${\mathbb{A}^{n+1}/\mathbb{F}_q\setminus\left\{ 0 \right\}}$ determines a line through the origin, and each such line contains ${q}$ points (including the origin), hence ${q-1}$ nonzero ones, so we divide by ${q-1}$. If we write down the polynomial counting the number of points in ${\mathbb{P}^n/\mathbb{F}_q}$ (i.e., ${q^n+q^{n-1}+\ldots+1}$) and evaluate it at ${q=1}$ we see something that leads to a nontrivial object! To make this analogy really sound you have to write down what you actually want from a finite projective space and axiomatise it. But let’s rejoice for now, and conclude

The projective space ${\mathbb{P}^n/\mathbb{F}_1}$ contains ${n+1}$ points.
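To see this count in action, here is a quick Python sanity check (the function names are mine, not standard):

```python
def projective_point_count(n, q):
    """Number of points of P^n over F_q: (q^(n+1) - 1) / (q - 1)."""
    return (q ** (n + 1) - 1) // (q - 1)

def counting_polynomial(n, q):
    """The same count written as the polynomial q^n + q^(n-1) + ... + 1,
    which, unlike the quotient, also makes sense at q = 1."""
    return sum(q ** k for k in range(n + 1))
```

Evaluating the counting polynomial at $q=1$ indeed gives $n+1$: for instance `counting_polynomial(3, 1)` returns `4`.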

What we have actually done is changing the construction of ${\mathbb{P}^n/\mathbb{F}_1}$ from an algebraic-geometric viewpoint to a combinatorial-geometric viewpoint, something that is quintessential in finite geometry (and I guess I’m the angs+’er with the most background in this kind of stuff :)). If people are interested in a write-up about this, I’d be happy to provide one but I suspect my fellow seminarians are not that into finite geometry and combinatorics. The only important observation we need to make is that a line in ${\mathbb{P}^n/\mathbb{F}_1}$ contains exactly two points in this sense, as a line in ${\mathbb{P}^n/\mathbb{F}_q}$ contains ${q+1}$ points by the axioms of a combinatorial-geometric projective space.

Now taking ${\mathbb{F}_1}$-vector spaces to be sets actually makes sense: ${\mathbb{A}^{n+1}/\mathbb{F}_1}$ is an ${(n+1)}$-set; take any point as the distinguished base point (or origin) and consider “lines” through the origin, or more appropriately ${2}$-subsets containing it, of which we have exactly ${n}$ as we started with ${n+1}$ elements and fixed an origin. But this (inherently geometric) idea of “fixing an origin” has its downsides in what follows, where we will instead adjoin an origin. That is more in the spirit of algebra: adding a zero vector (which doesn’t exist over ${\mathbb{F}_1}$) changes neither the dimension nor the cardinality of a basis of a vector space.
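Both ways of getting at the lines are one-liners with `itertools` (a toy model; names mine):

```python
from itertools import combinations

def lines_fixed_origin(n):
    """A^{n+1}/F_1 as an (n+1)-set with the point 0 chosen as origin;
    a 'line through the origin' is a 2-subset containing 0."""
    return [s for s in combinations(range(n + 1), 2) if 0 in s]

def lines_adjoined_origin(n):
    """Instead adjoin a new origin '*' to the (n+1)-set."""
    points = ['*'] + list(range(n + 1))
    return [s for s in combinations(points, 2) if '*' in s]
```

Fixing an origin inside the set leaves $n$ lines, while adjoining one gives $n+1$, matching the point count of $\mathbb{P}^n/\mathbb{F}_1$.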


Now we can switch our attention to Kapranov and Smirnov’s unfinished paper. If Wittgenstein were an algebraic geometer interested in ${\mathbb{F}_1}$-geometry I guess he would have written a Tractatus Absoluto-Geometricus, which could have looked (in a very crude sense) like this:

  1. 1 Geometry over ${\mathbb{F}_1}$ can only be understood through induced objects.
  2. 2 Vector spaces over ${\mathbb{F}_1}$ are plain sets.
    1. 2.1 Dimension equals cardinality.
    2. 2.2 ${\mathrm{GL}_n(\mathbb{F}_1)=\mathrm{S}_n}$.
    3. 2.3 ${\mathrm{SL}_n(\mathbb{F}_1)=\mathrm{A}_n}$.
    4. 2.4 ${\det\colon\mathrm{GL}_n(\mathbb{F}_1)\rightarrow\mathbb{F}_1^\times}$ is the sign homomorphism.
    5. 2.5 The Grassmannian ${\mathrm{Gr}(k,n)(\mathbb{F}_1)}$ is the set of ${k}$-subsets.
    6. 2.6 There is no harm in formally adjoining a zero vector to an ${\mathbb{F}_1}$-vector space, turning it into a pointed set; just be careful with interpretations.
  3. 3 The polynomial ring ${\mathbb{F}_1[t]}$ can only be understood through its automorphisms.
    1. 3.1 Polynomial automorphisms are a generalisation of field automorphisms by evaluation at “zero”.
    2. 3.2 We have ${\mathrm{GL}_n(\mathbb{F}_1[t])\rightarrow\mathrm{GL}_n(\mathbb{F}_1)}$.
    3. 3.3 ${\mathrm{GL}_n(\mathbb{F}_1[t])=\mathrm{B}_n}$, the braid group on ${n}$ strings, by analogy with the canonical ${\mathrm{B}_n\rightarrow\mathrm{S}_n}$.
    4. 3.4 For more information I refer you to the grandmaster himself and his blog post ${\mathbb{F}_1}$ and braid groups.
  4. 4 Finite fields have finite extensions and algebraic closures and so does ${\mathbb{F}_1}$.
    1. 4.1 ${\mathbb{F}_{1^n}}$ as a vector space over ${\mathbb{F}_1}$ is the (pointed) set ${\mu_n}$ consisting of the ${n}$-th roots of unity and an adjoined zero (which will serve as the point of the pointed set).
    2. 4.2 By choosing a primitive root in ${\mu_n}$ we get a non-canonical isomorphism ${\mathrm{C}_n\cong\mu_n}$.
    3. 4.3 ${\mathbb{A}^1/\mathbb{F}_1}$ as a scheme is ${\mathrm{Spec}\,\mathbb{F}_1[t]}$.
    4. 4.4 ${\mathrm{Spec}\,\mathbb{F}_1[t]}$ describes the algebraic closure of ${\mathbb{F}_1}$.
    5. 4.5 ${\overline{\mathbb{F}_1}}$ therefore corresponds to ${\mathrm{\mu}_\infty\cup\left\{ 0 \right\}}$.
  5. 5 We can generalise linear algebra to finite extensions.
    1. 5.1 The action of ${\mathbb{F}_{1^n}}$ on ${V}$ is the action of ${\mu_n}$ on ${V\setminus\left\{ 0 \right\}}$.
    2. 5.2 ${\mathbb{F}_{1^n}}$-vector spaces are nothing but ${\mu_n}$-sets.
    3. 5.3 A ${d}$-dimensional ${\mathbb{F}_{1^n}}$-space contains ${dn}$ points.
    4. 5.4 The ${\mu_n}$-action is strictly multiplicative, there is no additive structure.
    5. 5.5 The lack of additive structure agrees with the notion of vector spaces as (pointed) sets.
    6. 5.6 An ${\mathbb{F}_{1^n}}$-basis is a set containing a representative of every orbit under the ${\mu_n}$-action.
  6. 6 We can interpret other finite objects over ${\mathbb{F}_1}$.
    1. 6.1 ${\mathbb{F}_q}$ is an ${\mathbb{F}_{1^n}}$-vector space if ${q\equiv 1\bmod n}$ because ${(\mathbb{F}_q^\times,\cdot)\cong(\mathrm{C}_{nd},\cdot)}$ for ${d=(q-1)/n}$, and therefore an ${\mathbb{F}_{1^n}}$-algebra.
    2. 6.2 This vector space structure induces the (necessarily unique) ${\mathbb{F}_{1^n}}$-vector space structure on ${\mathbb{F}_q^e}$ of dimension ${ed}$.
    3. 6.3 All development of techniques should coincide with this observation.
    4. 6.4 For a construction of exact sequences over ${\mathbb{F}_1}$ I refer you to Absolute linear algebra.
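Some of these slogans can be checked mechanically. For 2.4, the sign of a permutation is the determinant of its permutation matrix; for 2.5, the Gaussian binomial $\binom{n}{k}_q$, which counts $k$-subspaces of $\mathbb{F}_q^n$, evaluates at $q=1$ to the number of $k$-subsets. A Python sketch (the helper names are mine):

```python
from itertools import combinations, permutations
from math import comb

def gaussian_binomial(n, k):
    """Coefficient list (lowest degree first) of the q-binomial [n choose k]_q,
    built from the q-Pascal rule [n,k] = [n-1,k-1] + q^k [n-1,k]."""
    if k < 0 or k > n:
        return [0]
    if k == 0 or k == n:
        return [1]
    a = gaussian_binomial(n - 1, k - 1)
    b = gaussian_binomial(n - 1, k)
    out = [0] * max(len(a), len(b) + k)
    for i, c in enumerate(a):
        out[i] += c
    for i, c in enumerate(b):
        out[i + k] += c
    return out

def sign(perm):
    """Sign of a permutation (a tuple of 0..n-1), by counting inversions;
    this equals the determinant of the corresponding permutation matrix."""
    n = len(perm)
    inv = sum(perm[i] > perm[j] for i in range(n) for j in range(i + 1, n))
    return -1 if inv % 2 else 1
```

For instance $\binom{4}{2}_q = 1+q+2q^2+q^3+q^4$, which at $q=1$ gives $6=\binom{4}{2}$, the number of $2$-subsets of a $4$-set.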

For the real Wittgenstein aficionado, my apologies for not hitting the right tone and ideas. This post mostly served as a way for me to get all my ${\mathbb{F}_1}$-folklore straight, so that I can explain to a complete outsider why stuff in the ${\mathbb{F}_1}$-world is taken as what it is. If you spot any gaps, please tell me.

If you followed me this far you should be familiar enough with the ideas and twists of mind necessary for understanding some parts of Kapranov-Smirnov, or you have made up your mind and will take all ${\mathbb{F}_1}$-stuff to be dadaist nonsense. But before tackling Kapranov-Smirnov I suggest you read the series of posts developing the same theme as this one, but in greater depth:

  1. The ${\mathbb{F}_1}$ folklore
  2. Absolute linear algebra
  3. ${\mathbb{F}_1}$ and braid groups

For the next (and last) post of this series, the main idea will be “Stuff over ${\mathbb{F}_1}$ contains multiplicative structure, not additive structure”.

No bollocks, just rings

Today (one of) the goals of this blog will be neglected, and we’ll focus solely on its URL. No $\mathbb{F}_{1}$ or Riemann hypothesis, just some old-fashioned ring theory.

Misplaced Poetics

I’d like to write up one of (Jacobson–)Herstein’s commutativity theorems in noncommutative ring theory. It’s a beautiful testament to the power of subdividing rings into nontrivial classes, and is filled to the brim with small, simple, but extremely elegant ideas. Part of the theorem’s appeal stems from the fact that, once confronted with its statement, you’re bound to ask: who cares?! It seems so useless, and that somehow makes it all the more fun. Of course, there was a good reason for proving the theorem, but I’ll just leave you to ponder some applications. I’m in a “l’art pour l’art”-mood, hoping Poe doesn’t turn in his grave.


Without further ado, here’s the theorem I’ve been going on about:

Theorem. A ring $R$ is commutative iff for any $a,b \in R$, there exists an integer $n(a,b)>1$ such that $(ab-ba)^{n(a,b)}=ab-ba$.

Of course, your first idea, like everyone else’s, is that this can be proven by (possibly) extreme amounts of formula bashing. I encourage you to try this, as I have, and get very tired (and a tad frustrated) after wasting the better part of a Friday morning (and whacking half a tree). If you do manage to find a proof by pure calculation, let me know! The way we’ll tackle the theorem is by a reduction argument along the following lines:

  • If $R$ is a division ring: see the next paragraph
  • If $R$ is a left primitive ring: reduce to the previous case
  • If $R$ is a semiprimitive ring: reduce to the previous case
  • If $R$ is a (general) ring: reduce to the previous case

Just lovely, isn’t it?

Thé lemma

Crucial for proving the first part of the theorem is the following lemma, which will be applied in the next paragraph.

Lemma. Let $D$ be a division ring with nonzero characteristic. If $a$ is a non-central, periodic element in $D^{*}$, then there exists an additive commutator $b$ in $D^{*}$ such that $bab^{-1}=a^{i} \neq a$, for some $i>0$.

Taking $F_{p}$ to be the prime subfield of $D$, the $F_{p}$-algebra generated by $a$ is a finite subfield of $D$, of dimension $n$ over $F_{p}$, which we’ll denote $K$. The order of this field is $p^{n}$, and $a^{p^{n}}=a$. Since $a$ isn’t central, the inner derivation $\delta$, defined by $\delta(x)=ax-xa$, is non-zero. You can easily check that it is, however, a $K$-linear mapping of the $K$-vector space $D$. Using the Frobenius, one sees that $\delta^{p^{n}}=\delta$. Since any element $b \in K$ satisfies $b^{p^{n}}=b$, the following equation holds: $$t^{p^{n}} -t= \prod_{b \in K}(t-b) \in K[t].$$ Substituting $\delta$ into this equation, and remembering that $\delta \neq 0$ and that monos are left cancellable, we see there has to exist a $b_{0} \in K^{*}$ such that $\delta-b_{0}$ is not a mono. This means $\delta$ has an eigenvector $d$ with eigenvalue $b_{0}$. Using the definition of $\delta$, it follows that $dad^{-1}=a-b_{0} \in K \setminus \{a\}$. Since $a$ and $dad^{-1}$ have the same order in the cyclic group $K^{*}$, they generate the same subgroup, and $dad^{-1}=a^{i} \neq a$. To make sure $d$ is an additive commutator, replace it by $\delta(d)$; the equation still holds.
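As a sanity check on the displayed factorisation, here it is verified by brute force for the prime field case $K=\mathbb{F}_p$ (a sketch; the helper names are mine):

```python
def poly_mul_mod(f, g, p):
    """Multiply polynomials mod p; coefficient lists, lowest degree first."""
    out = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] = (out[i + j] + a * b) % p
    return out

def product_of_linear_factors(p):
    """Expand prod_{b in F_p} (t - b) over F_p; should equal t^p - t."""
    f = [1]
    for b in range(p):
        f = poly_mul_mod(f, [(-b) % p, 1], p)
    return f
```

For $p=3$ this returns the coefficient list of $t^3+2t \equiv t^3-t \bmod 3$, as expected.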

Division rings

Suppose $R$ is a division ring which doesn’t equal its centre $F$; then there must exist an additive commutator which isn’t central, say $a=bb'-b'b$. Taking an element $c$ in $F^{*}$, a quick calculation shows that $ca$ is an additive commutator as well, which can’t be central either. The assumptions then imply that there is a number $k$ such that $a^{k}=(ca)^{k}=1$; since $(ca)^{k}=c^{k}a^{k}=c^{k}$, any element in $F^{*}$ is periodic, and $R$ has non-zero characteristic. Applying the lemma of the previous paragraph to $a$, an additive commutator $y$ exists such that $yay^{-1}=a^{i} \neq a$, and the group generated by $a$ and $y$ is a finite, periodic subgroup of $R^{*}$. Such a group has to be cyclic, hence abelian, which contradicts $yay^{-1} \neq a$. Therefore $R$ is commutative.

The proof of the pudding …

Let’s finish the proof. Suppose $R$ is a left primitive ring. The structure theorem for left primitive rings (basically a revamped version of the Jacobson–Chevalley density theorem) says that either $R$ is isomorphic to a matrix ring $M_{m}(D)$ over a skew field $D$, or $R$ has an infinite number of subrings $R_{m}$, each one having $M_{m}(D)$ as a quotient. Supposing $m>1$, and denoting by $E_{ij}$ the matrix with a $1$ in the $ij$-place and zeroes everywhere else, the obvious identity $$E_{11} E_{12} - E_{12} E_{11}=E_{12}$$ gives us a contradiction: $E_{12}$ is nilpotent, so raising the commutator to any power greater than $1$ yields zero, not $E_{12}$. This means $R \cong D$, and we have reduced to the previous case.
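The identity and the resulting contradiction are easy to check numerically; a small sketch with hand-rolled $2\times 2$ matrices:

```python
def mat_mul(A, B):
    """Product of square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

E11 = [[1, 0], [0, 0]]
E12 = [[0, 1], [0, 0]]

# the additive commutator E11 E12 - E12 E11 equals E12 ...
comm = [[a - b for a, b in zip(ra, rb)]
        for ra, rb in zip(mat_mul(E11, E12), mat_mul(E12, E11))]

# ... but E12 squares to zero, so no power n > 1 of the commutator
# can give back the commutator itself
square = mat_mul(E12, E12)
```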

If our ring is semiprimitive, the Jacobson radical, denoted $J(R)$, is zero. Given a left primitive ideal $M_{i}$, take the quotient ring $R/M_{i}$, which is obviously left primitive, and inherits the property in the theorem because it’s a surjective image of $R$. This means each one of these rings is commutative. Now $R$ can be embedded in the product of all these rings, since the Jacobson radical, which is equal to the intersection of all left primitive ideals, is zero. Thus $R$ is commutative.

For a general ring, $R/J(R)$ is semiprimitive, and thus commutative. For $a, b$ in the ring this means $ab-ba$ sits in $J(R)$, and we know there exists an $n$ such that $(ab-ba)(1-(ab-ba)^{n-1})=0$. Since $ab-ba$ lies in the Jacobson radical, $1-(ab-ba)^{n-1}$ is invertible, and $ab-ba=0$. Voilà!

More of this fun stuff can be found in Lam’s A first course in noncommutative rings and Lectures on modules and rings.


Prep-notes dump

Here are the scans of my rough prep-notes for some of the later seminar-talks. These notes still contain mistakes, most of which were corrected during the talks. So, please, read these notes with both mercy and caution!

Hurwitz formula implies ABC : The proof of Smirnov’s argument, but modified so that one doesn’t require an $\epsilon$-term. This is known to be impossible in the number-theory case, but a possible explanation might be that not all of the Smirnov-maps $q~:~\mathsf{Spec}(\mathbb{Z}) \rightarrow \mathbb{P}^1_{\mathbb{F}_1}$ are actually covers.

Frobenius lifts and representation rings : Faithfully flat descent allows us to view torsion-free $\mathbb{Z}$-rings with a family of commuting Frobenius lifts (aka $\lambda$-rings) as algebras over the field with one element $\mathbb{F}_1$. We give several examples including the two structures on $\mathbb{Z}[x]$ and Adams operations as Frobenius lifts on representation rings $R(G)$ of finite groups. We give an example showing that this extra structure may separate groups having the same character table. In general this is not the case; the magic Google search term is ‘Brauer pairs’.

Big Witt vectors and Burnside rings : Because the big Witt vectors functor $W(-)$ is adjoint to the tensor-functor $- \otimes_{\mathbb{F}_1} \mathbb{Z}$ we can view the geometrical object associated to $W(A)$ as the $\mathbb{F}_1$-scheme determined by the arithmetical scheme with coordinate ring $A$. We describe the construction of $\Lambda(A)$ and describe the relation between $W(\mathbb{Z})$ and the (completion of the) Burnside ring of the infinite cyclic group.

Density theorems and the Galois-site of $\mathbb{F}_1$ : We recall standard density theorems (Frobenius, Chebotarev) in number theory and use them in combination with the Kronecker-Weber theorem to prove the result due to James Borger and Bart de Smit on the etale site of $\mathsf{Spec}(\mathbb{F}_1)$.

New geometry coming from $\mathbb{F}_1$ : This is a more speculative talk trying to determine what new features come up when we view an arithmetic scheme over $\mathbb{F}_1$. It touches on the geometric meaning of dual-coalgebras, the Habiro-structure sheaf and Habiro-topology associated to $\mathbb{P}^1_{\mathbb{Z}}$ and tries to extend these notions to more general settings. These scans are unintentionally made mysterious by the fact that the bottom part is blacked out (due to the fact they got really wet and dried horribly). In case you want more info, contact me.

$\mathbb{F}_1$ and noncommutative geometry

why noncommutative geometry?

Some motivate noncommutative geometry as follows : assume you have a space (or variety) $X$ on which a group $G$ acts so wildly that the ‘orbit-space’ $X/G$ does not exist or has bad topological properties. Let $A$ be the ring of continuous functions on $X$ (or the coordinate ring $\mathcal{O}(X)$); then every $g \in G$ acts as an automorphism $\alpha_g$ on $A$.

Traditionally one associates to the orbit-space (when possible) the commutative fixed-point algebra $A^G$. However, when this algebra is too small to give information on the $G$-orbits in $X$ one can still associate a noncommutative algebra to the situation, the crossed product algebra $A \ast G$, which as a vector space is merely $A \otimes \mathbb{C} G$ but with multiplication induced by $(a \otimes g) (b \otimes h) = a \alpha_g(b) \otimes g h$. Some argue that ringtheoretical invariants of $A \ast G$ give some insight into the horrible orbit-space $X/G$.

relevant to $\mathbb{F}_1$-geometry?

We’ve defined an algebra $A$ over $\mathbb{F}_1$ to be a torsion-free $\mathbb{Z}$-ring equipped with a commuting family of endomorphisms $\Psi^n~:~A \rightarrow A$ such that for every prime number $p$ the endomorphism $\Psi^p$ is a lift of the Frobenius map on $A/pA$. This gives an action by endomorphisms of the multiplicative monoid $\mathbb{N}_{\times}$ on $A$.
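The archetypal example is $A = \mathbb{Z}[x]$ with $\Psi^n(x) = x^n$; the Frobenius-lift condition $\Psi^p(f) \equiv f^p \bmod p$ can then be checked directly (a sketch; helper names mine):

```python
def psi(f, n):
    """Psi^n on Z[x] sending x to x^n; f is a coefficient list, low degree first."""
    out = [0] * ((len(f) - 1) * n + 1)
    for i, c in enumerate(f):
        out[i * n] += c
    return out

def poly_mul_mod(f, g, p):
    """Polynomial product with coefficients reduced mod p."""
    out = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] = (out[i + j] + a * b) % p
    return out

def frobenius_lift_holds(f, p):
    """Check Psi^p(f) = f^p in (Z/p)[x], i.e. Psi^p lifts Frobenius on A/pA."""
    fp = [1]
    for _ in range(p):
        fp = poly_mul_mod(fp, f, p)
    return [c % p for c in psi(f, p)] == fp
```

That this works is just the “freshman’s dream” $f(x)^p \equiv f(x^p) \bmod p$.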

We’ve interpreted this additional structure as descent-data from $\mathbb{Z}$ to $\mathbb{F}_1$. Now, in the case of Galois-descent between two fields $k \subset K$ with $Gal(K/k)=G$, the $k$-algebra corresponding to a $K$-algebra $A$ with descent-data $G \rightarrow Aut(A)$ is, of course, the fixed-point algebra $A^G$.

Of course, in the $\mathbb{F}_1$-setting it makes no sense to look at the fixed-point ring $A^{\mathbb{N}_{\times}}$, but we can still consider the corresponding noncommutative ring

$A \ast \mathbb{N}_{\times}$

which as a $\mathbb{Z}$-module is the tensor-product $A \otimes_{\mathbb{Z}} \mathbb{Z} [\mathbb{N}_{\times}]$ where $\mathbb{Z} [\mathbb{N}_{\times}]$ is the monoid-algebra of the commutative monoid $\mathbb{N}_{\times}$. As above, the multiplication is induced by the rule (using the variables $X_n = 1 \otimes n$)

$(a X_n) (b X_m) = a \Psi^n(b) X_{mn}$

If you are a lowly ringtheorist this is already daunting enough, because the fact that the crossing is made with endos rather than autos kills most of the desired properties of your noncommutative ring (for example Noetherianness). But, if you’re a $C^{\ast}$-algebraist, then you want to complicate matters even more, as you need variables $X_n^{\ast}$ corresponding to the $X_n$ satisfying suitable properties. If this is possible, we will denote the noncommutative algebra generated by $A$, the $X_n$ and the $X_n^*$ by $A \circ \mathbb{N}_{\times}$.

the giant mashup-algebra $\mathbb{Z}[\mathbb{Q}/\mathbb{Z}] \circ \mathbb{N}_{\times}$, aka BC

Lots of papers are written trying to get novel insights into the BC-algebra by looking at its adelic-, motivic-, semi-hemi-demi-, p-adic-, $\mathbb{F}_1$-gadgety or whatever-comes-next interpretation. It is the archetypical example of the above construction.

Let’s define it by generators and relations using its ‘integral’ incarnation. Generators are $e(r)$, one for each $r \in \mathbb{Q}/\mathbb{Z}$ and elements $\tilde{\mu}_n$ and $\mu_n^*$ for $n \in \mathbb{N}_+$. The relations are

$e(r) e(s) = e(r+s)~\forall r,s \in \mathbb{Q}/\mathbb{Z}$

$\tilde{\mu}_n \tilde{\mu}_m = \tilde{\mu}_{nm}~\forall n,m \in \mathbb{N}_+$

$\mu_n^* \mu_m^* = \mu^*_{nm}~\forall n,m \in \mathbb{N}_+$

$\mu_n^* \tilde{\mu}_n = n~\quad \text{and} \quad \tilde{\mu}_n \mu^*_m = \mu^*_m \tilde{\mu}_n~\quad~\text{whenever} \quad (m,n)=1$

$\mu^*_n e(r) = e(nr) \mu^*_n~\forall r \in \mathbb{Q}/\mathbb{Z}, n \in \mathbb{N}_+$

$e(r) \tilde{\mu}_n = \tilde{\mu}_n e(nr)~\forall r \in \mathbb{Q}/\mathbb{Z}, n \in \mathbb{N}_+$

$\tilde{\mu}_n e(r) \mu^*_n = \sum_{ns=r} e(s)~\forall r \in \mathbb{Q}/\mathbb{Z}, n \in \mathbb{N}_+$

The first relation implies that the $\mathbb{Z}$-ring generated by the $e(r)$ is the integral group-ring $\mathbb{Z}[\mathbb{Q}/\mathbb{Z}]$. Taking $e(r) \mapsto e^{2 \pi i r}$ we see that this ring is isomorphic to the integral group-ring $\mathbb{Z}[\pmb{\mu}_{\infty}]$ of the multiplicative group of all roots of unity.

$\mathbb{Z}[\pmb{\mu}_{\infty}]$ is a $\lambda$-ring (actually, our best shot at the algebraic closure $\overline{\mathbb{F}}_1$) with endomorphisms $\Psi^n(e^{2 \pi i r}) = e^{2 \pi i nr}$ (which correspond to the endomorphisms $e(r) \mapsto e(nr)$ in $\mathbb{Z}[\mathbb{Q}/\mathbb{Z}]$).

Hence, we see that the subring generated by the $e(r)$ and the $\mu_n^*$ is actually isomorphic to the noncommutative crossed product $\mathbb{Z}[\pmb{\mu}_{\infty}] \ast \mathbb{N}_{\times}$ constructed before. The full BC-algebra is then what we have denoted $\mathbb{Z}[\pmb{\mu}_{\infty}] \circ \mathbb{N}_{\times}$.
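The commutative part of this construction, the group ring $\mathbb{Z}[\mathbb{Q}/\mathbb{Z}]$ together with the endomorphisms $e(r)\mapsto e(nr)$, is small enough to model directly; a sketch using `Fraction`s taken mod $1$ as elements of $\mathbb{Q}/\mathbb{Z}$ (the representation is my own):

```python
from fractions import Fraction
from collections import defaultdict

def e(num, den=1):
    """Basis element e(r) of Z[Q/Z], with r = num/den taken mod 1."""
    return {Fraction(num, den) % 1: 1}

def mul(u, v):
    """Group-ring product: extend e(r) e(s) = e(r + s) bilinearly."""
    out = defaultdict(int)
    for r, a in u.items():
        for s, b in v.items():
            out[(r + s) % 1] += a * b
    return {r: c for r, c in out.items() if c}

def psi(u, n):
    """The endomorphism Psi^n: e(r) -> e(nr), extended linearly."""
    out = defaultdict(int)
    for r, a in u.items():
        out[(n * r) % 1] += a
    return {r: c for r, c in out.items() if c}
```

One can then check mechanically that each $\Psi^n$ is a ring endomorphism and that $\Psi^n \Psi^m = \Psi^{nm}$, i.e., that $\mathbb{N}_\times$ acts.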

More information on the (classical) BC-algebra can be found in these neverendingbook-posts : as a giant mash-up of arithmetical information and its relation to the Riemann zeta-function.

In view of the Borger-de Smit result characterizing the etale site of $\mathsf{Spec}(\mathbb{F}_1)$ it is perhaps interesting to consider the multi-variate BC-algebras $\mathbb{Z}[\pmb{\mu}_{\infty}] \otimes \cdots \otimes \mathbb{Z}[\pmb{\mu}_{\infty}] \circ \mathbb{N}_{\times}$ defined in the now obvious way.

More food-for-thought : take your favorite torsion free $\mathbb{Z}$-ring $A$ and construct your own BC-lookalike algebra $W(A) \circ \mathbb{N}_{\times}$ making clever use of the Adams operations $\Psi^n$ and the ‘Verschiebung’-operations on the ring of big Witt vectors $W(A)$.

On On$_2$

In the previous post in this series I promised to do something with fields of characteristic two, and instead I did weird things with surreal numbers and ordinals. Neither of them has characteristic two, because we used the wrong arithmetic. In this post, I will give new definitions of addition and multiplication in On, and prove that they are actually the same. This will turn On into a field of characteristic two, which we shall call On$_2$. From now on, we distinguish the ordinary operations from those in On$_2$ by the use of square brackets. All expressions between $[$ and $]$ are meant in the sense of ordinary arithmetic.

Arithmetic in On$_2$

Simplicity rules

The most obvious and at the same time unusual way of defining an addition is by starting from zero and working up. We will find the simplest addition and multiplication which make On into a field.

There is no reason why $0+0$ cannot be $0$, because there are fields (any field) with an element satisfying $x+x=x$. This is the first entry in our addition-table, and it implies that $0$ must be the zero element, so we must have $0+\alpha=\alpha+0=\alpha$ for all $\alpha$. The first row and column are already filled. What about $1+1$? The least possible answer is $0$, which gives us characteristic two. Next is $1+2$. This cannot be $0$, $1$ or $2$, so we must take $3$. We can go on like this, always taking the least value of $\alpha + \beta$ that differs from all earlier entries $\alpha' + \beta$ and $\alpha + \beta'$ (with $\alpha' < \alpha$ and $\beta' < \beta$), as addition in a field is cancellative.

We do the same for multiplication. $0.\alpha$ can be $0$, so $0$ must be the zero of the field. Because $1.1 = 1$ is possible, $1$ is the one. The first two rows and columns are filled with $0.\alpha$, $\alpha.0$, $1.\alpha$ and $\alpha.1$. It is obvious that $2.2$ cannot be $0$, $1$ or $2$. Since there are fields (e.g. $\mathbb{F}_4$) with elements that satisfy $x^2 = x + 1$, $3$ is possible. Note that the product has to be compatible with previous entries and with the whole addition-table.


These definitions are rather difficult to work with, because we must prove a theorem every time we want to fill in an entry. Besides, it is not obvious that these definitions really define a field.

Inductive definitions

Define the minimal excluded number $\DeclareMathOperator{\mex}{mex}\mex(S)$ as the least ordinal not in the set $S$. This can be used for the following inductive definitions:

  • $\alpha + \beta = \mex(\alpha' + \beta, \alpha + \beta')$, taken over all $\alpha' < \alpha$ and $\beta' < \beta$
  • $\alpha\beta = \mex(\alpha'\beta + \alpha\beta' + \alpha'\beta')$, likewise over all $\alpha' < \alpha$ and $\beta' < \beta$
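These inductive definitions translate almost verbatim into code; here is a naive memoised sketch for finite ordinals (hopelessly slow beyond small values, but faithful to the mex recursions):

```python
from functools import lru_cache

def mex(s):
    """Least natural number (finite ordinal) not in the set s."""
    n = 0
    while n in s:
        n += 1
    return n

@lru_cache(maxsize=None)
def nim_add(a, b):
    # mex over all sums with one argument decreased
    return mex({nim_add(x, b) for x in range(a)} |
               {nim_add(a, y) for y in range(b)})

@lru_cache(maxsize=None)
def nim_mul(a, b):
    # mex over all x'b + ab' + x'b' with x = a', y = b'
    return mex({nim_add(nim_add(nim_mul(x, b), nim_mul(a, y)), nim_mul(x, y))
                for x in range(a) for y in range(b)})
```

On small values one checks that `nim_add` is binary XOR, that $\alpha+\alpha=0$, and that e.g. $2\cdot 2=3$ and $3\cdot 3=2$, as in $\mathbb{F}_4$.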

It is easy to verify that $\alpha + \alpha = 0$, because $\alpha + \alpha'$ (with $\alpha' < \alpha$) cannot be zero, so $0$ is the mex. We can now prove that these definitions are equivalent to the former.

If $\alpha + \beta < \mex(\alpha' + \beta, \alpha + \beta')$, then $\alpha + \beta$ coincides with one of the excluded values: either $\alpha + \beta = \alpha' + \beta$ for some $\alpha' < \alpha$, or $\alpha + \beta = \alpha + \beta'$ for some $\beta' < \beta$. Cancellation then gives $\alpha = \alpha'$ or $\beta = \beta'$, which is impossible. Therefore, $$\alpha + \beta \geq \mex(\alpha' + \beta, \alpha + \beta').$$

If $\alpha\beta < \mex(\alpha'\beta + \alpha\beta' + \alpha'\beta')$, there exist $\alpha'$ and $\beta'$ so that $\alpha\beta = \alpha'\beta + \alpha\beta' + \alpha'\beta'$. This is equivalent to $$\begin{gather}\alpha\beta + \alpha'\beta + \alpha\beta' + \alpha'\beta' = 0 \\ (\alpha + \alpha')(\beta + \beta') = 0 ,\end{gather}$$ which implies $\alpha = \alpha'$ or $\beta = \beta'$. Both are impossible. It follows that $$\alpha\beta \geq \mex(\alpha'\beta + \alpha\beta' + \alpha'\beta').$$

If we can prove that these inductive definitions form a field, it must be the smallest possible field, as defined before. This is a standard verification.


The inductive definition of the sum is known as nim-addition (frequently used in the theory of the game of Nim). An easy rule to perform nim-addition is:

  1. The nim-sum of a number of distinct $2$-powers is their ordinary sum.
  2. The nim-sum of two equal numbers is 0.

This rule allows us to compute the nim-sum of finite and infinite ordinals. A similar rule for nim-multiplication is:

  1. The nim-product of a number of distinct Fermat $2$-powers (numbers of the form $2^{2^n}$) is their ordinary product.
  2. The square of a Fermat $2$-power is its sesquimultiple (multiplying by $\frac{3}{2}$ in the ordinary sense).

Unfortunately, this rule applies only to finite ordinals. A more general rule is explained at neverendingbooks.

Groups in On$_2$

The ordinals that are groups are precisely the $2$-powers. This can be proved with the simplest extension theorems.

Theorem 1. If $\Delta$ is not a group (under addition), then $\Delta = \alpha + \beta$, where $(\alpha, \beta)$ is the lexicographically earliest pair of numbers in $\Delta$ whose sum is not in $\Delta$.

Theorem 2. If $\Delta$ is a group, we have $[\Delta\alpha] + \beta = [\Delta\alpha + \beta]$, for all $\alpha$, and all $\beta \in \Delta$.

If $\Delta$ is a group, and $\Gamma$ is a group with $\Delta < \Gamma < [\Delta.2]$, we can write $\Gamma = [\Delta + \delta]$ with $\delta < \Delta$. This is a contradiction: since $\Gamma$ is a group containing both $\Delta$ and $\delta$, we get $\Delta + \delta < \Gamma$, while $[\Delta + \delta] = \Delta + \delta$ according to Theorem 2, so $\Delta + \delta = \Gamma$.

Moreover, $[\Delta.2]$ is itself closed under addition: if $\alpha, \beta \in [\Delta.2]$, there are three possible cases (up to swapping $\alpha$ and $\beta$).

  1. $\alpha < \Delta$ and $\beta < \Delta$. Then $\alpha + \beta < \Delta < [\Delta.2]$
  2. $\alpha \geq \Delta$ and $\beta < \Delta$. By Theorem 2:
    $\alpha + \beta = [\Delta + \delta] + \beta = \Delta + \delta + \beta = \Delta + \delta' = [\Delta + \delta'] < [\Delta.2]$
  3. $\alpha \geq \Delta$ and $\beta \geq \Delta$. By Theorem 2 and $\alpha + \alpha = 0$:
    $\alpha + \beta = [\Delta + \delta] + [\Delta + \delta'] = \Delta + \Delta + \delta + \delta' = \delta + \delta' < \Delta < [\Delta.2]$

This proves that if $\Delta$ is any group, then the next group is $[\Delta.2]$. Because $2$ is a group, it follows that the groups are the $2$-powers. This justifies the rule for the calculation of nim-sums.

Fields in On$_2$

Similar theorems exist for fields in On$_2$. Complete proofs can be found in Conway’s On Numbers and Games.

Theorem 3. If $\Delta$ is a group but not a ring, then $\Delta = \alpha\beta$, where $(\alpha, \beta)$ is the lexicographically earliest pair of numbers in $\Delta$ whose product is not in $\Delta$.

Theorem 4. If $\Delta$ is a ring but not a field, then $\Delta = \alpha^{-1}$, where $\alpha$ is the earliest non-zero number in $\Delta$ which has no inverse in $\Delta$.

Theorem 5. If $\Delta$ is a field but not algebraically closed, then $\Delta$ is a root of the lexicographically earliest polynomial having no root in $\Delta$.

Finite ordinals

We will prove by induction that the finite ordinals that are fields are precisely the Fermat $2$-powers. We suppose that the following statements are true for $n$, and prove them for $n + 1$:

  1. $[2^{2^n}]$ is a field
  2. $[2^{2^{n-1}}]^2 = [\frac{3}{2}2^{2^{n-1}}]$
  3. $x^2 + x$ takes precisely the values $0, 1, \dotsc, [2^{2^n-1}-1]$ as $x$ varies in $[2^{2^n}]$

The lexicographically earliest irreducible polynomial over $[2^{2^n}]$ is $x^2 + x = [2^{2^n-1}]$, because $x^2 = \alpha$ always has a root in a finite field of characteristic $2$, and $x^2 + x = \alpha$ has a root for earlier $\alpha$ according to statement 3. We know by Theorem 5 that $[2^{2^n}]$ is a root of $x^2 + x = [2^{2^n-1}]$, hence $$\textstyle [2^{2^n}]^2 = [2^{2^n}] + [2^{2^n-1}] = [2^{2^n} + 2^{2^n-1}] = [\frac{3}{2}2^{2^n}].$$ We obtain the field $[2^{2^{n+1}}]$ as a vector space over $[2^{2^n}]$ with typical element $X = [2^{2^n}]x + y$. We examine the polynomial $$\begin{align}X^2 + X &= ([2^{2^n}]x + y)^2 + [2^{2^n}]x + y \\ &= [2^{2^n}]^2 x^2 + y^2 + [2^{2^n}]x + y \\ &= [2^{2^n}](x^2 + x) + ([2^{2^n-1}]x^2 + y^2 + y).\end{align}$$ By induction, $x^2 + x$ can take any value in $[2^{2^n-1}]$. Note that $x^2 + x$ remains unchanged when we replace $x$ by $x + 1$. The same is true for $y^2 + y$. It follows that $[2^{2^n-1}]x^2 + y^2 + y$ can be made to take any value in $[2^{2^n}]$ without affecting the value of $x^2 + x$. This implies that the values of $X^2 + X$ can be written as $[2^{2^n}]\alpha + \beta$, where $\alpha < [2^{2^n-1}]$ and $\beta < [2^{2^n}]$, which are precisely the values less than $[2^{2^{n+1}-1}]$.

This and the preceding theorems justify the rule for the calculation of nim-products.

Infinite ordinals

Consider the sequence $$[\omega^{\omega^k}], [\omega^{\omega^k p_k}], [\omega^{\omega^k p_k^2}], \dotsc, [\omega^{\omega^k p_k^n}], \dotsc$$ where $p_k$ is the $(k+1)$-st prime. Then the following statements are true for each $k > 0$:

  1. Each term in the sequence is a field
  2. The field $[\omega^{\omega^k p_k^n}]$ is the union of all finite fields $\mathbb{F}_{2^{p_0^{n_0} p_1^{n_1} \dotsm p_k^{n_k}}}$ with $n_i < \omega$ for $0 \leq i \leq k-1$ and $n_k \leq n$
  3. Each term is the $p_k$-th power of its successor, and $[\omega^{\omega^k}]$ is the $p_k$-th root of $\alpha_{p_k}$, which is the least number in $[\omega^{\omega^k}]$ with no $p_k$-th root in $[\omega^{\omega^k}]$.

We will prove this by induction on $k$.

$\boldsymbol{n = 0}$

$[\omega^{\omega^{k+1}}]$ is the union of all fields $[\omega^{\omega^k p_k^n}]$. It is obvious that this defines a field, and there are no fields in between. This proves statement 1, and statement 2 follows immediately. Because of Theorem 5, $[\omega^{\omega^{k+1}}]$ is a root of the lexicographically earliest polynomial having no root in $[\omega^{\omega^{k+1}}]$. If $f(x)$ is a polynomial of degree $d < p_{k+1}$, its coefficients are contained in a finite field $\mathbb{F}_{2^{p_0^{n_0} \dotsm p_k^{n_k}}}$. Therefore, any root of $f(x)$ is an element of the field $\mathbb{F}_{2^{p_0^{n_0} \dotsm p_k^{n_k} d}} = \mathbb{F}_{2^{p_0^{m_0} \dotsm p_k^{m_k}}}$, which is a subfield of $[\omega^{\omega^{k+1}}]$. It follows that the earliest irreducible polynomial is $x^{p_{k+1}} = \alpha_{p_{k+1}}$ with $\alpha_{p_{k+1}}$ as defined in statement 3.

$\boldsymbol{n > 0}$

Assume $\Gamma = [\omega^{\omega^{k+1} p_{k+1}^{n-1}}]$ is a field, and let $\Delta$ be the lexicographically earliest algebraic extension. The field $\Gamma$ is not closed under roots of polynomials of degree $p_{k+1}$, because $\mathbb{F}_{2^{p_{k+1}^n}}$ is a field extension of $\mathbb{F}_{2^{p_{k+1}^{n-1}}} \subset \Gamma$ of degree $p_{k+1}$, and $\mathbb{F}_{2^{p_{k+1}^n}}$ is not contained in $\Gamma$. This means that $[\Delta:\Gamma]$ is at most $p_{k+1}$. Therefore, every element $\alpha \in \Delta$ is the root of a polynomial $f(x)$ of degree $d \leq p_{k+1}$. By induction, all coefficients of $f(x)$ are contained in a field $\mathbb{F}_{2^{p_0^{n_0} \dotsm p_{k+1}^{n_{k+1}}}}$ with $n_{k+1} \leq n - 1$. It follows that any root of $f(x)$ is contained in $\mathbb{F}_{2^{p_0^{n_0} \dotsm p_{k+1}^{n_{k+1}}d}} = \mathbb{F}_{2^{p_0^{m_0} \dotsm p_{k+1}^{m_{k+1}}}}$ with $m_{k+1} \leq n$. If $d < p_{k+1}$, this is a subfield of $\Gamma$ and $f(x)$ is not irreducible. We can conclude that $[\Delta:\Gamma] = p_{k+1}$, and $\Delta = [\omega^{\omega^{k+1} p_{k+1}^{n}}]$. This proves statements 1 and 2.

If $f(x)$ is a polynomial $x^{p_{k+1}} = \alpha$ with $\alpha \in [\omega^{\omega^{k+1} p_{k+1}^{n-1}}]$, we know by induction that $\alpha$ is contained in $\mathbb{F}_{2^{p_0^{n_0} \dotsm p_{k+1}^{n_{k+1}}}}$ with $n_{k+1} \leq n – 1$. The root of $f(x)$ is thus contained in $\mathbb{F}_{2^{p_0^{m_0} \dotsm p_{k+1}^{m_{k+1}}}}$ with $m_{k+1} \leq n$, which is a subfield of $[\omega^{\omega^{k+1} p_{k+1}^{n}}]$. It follows that the earliest irreducible polynomial is $x^{p_{k+1}} = [\omega^{\omega^{k+1} p_{k+1}^{n-1}}]$. By Theorem 5, $[\omega^{\omega^{k+1} p_{k+1}^n}]$ is a root of this polynomial. This proves statement 3.

From this, we can conclude that $[\omega^{\omega^\omega}]$ is the algebraic closure of $2$ (i.e., of $\mathbb{F}_2$).


The computation of $\alpha_p$ is not a trivial task. Conway stopped at $\alpha_7$. Hendrik Lenstra described an effective method in his paper On the algebraic closure of two, and computed $\alpha_p$ for $p \leq 43$. Lieven Le Bruyn extended the list.