On Natural Units and the Foundations of Mathematics

I spend a lot of time thinking about the connections between information theory and reality, and this led me to both a mathematical theory of epistemology and a completely new model of physics. I did related work on the foundations of mathematics in Sweden back in 2019, but I tabled it, because the rest of the work was panning out incredibly well and I was writing a large number of useful research notes. Frankly, I didn’t get very far in pure mathematics: other than discovering a new number related to Cantor’s infinite cardinals, which is a big deal and solves the continuum hypothesis, I produced basically nothing useful.

Euler’s Identity is False

Recently I’ve had some more free time, and I started thinking about complex numbers again, in particular Euler’s Identity. I’m a graph theorist “by trade”, so I’m not keen on disrespecting what I believe to be a great mathematician, but Euler’s identity is just false. It asserts the following:

e^{i\pi } + 1 = 0.

I remember learning this in college and thinking it was a simply astonishing fact of mathematics: all these famous numbers connected through a single simple equation. But iconoclast that I am, I started questioning it. Specifically, I set x = i\pi, which implies that,

e^x = -1, and therefore,

x \log(e) = \log(-1), and so x =\log(-1).

This implies that,

e^x e^x = (-1)^2 = 1, and therefore, \log(e^x e^x) = \log(1).

Now typically, we assume that \log(1) = 0. However, applying this produces a contradiction, specifically, we find that,

\log(e^x e^x) = 0, which implies that x\log(e) + x\log(e) = 0, and therefore x = 0.

This implies that e^0 = -1, which contradicts the assumption that \log(1) = 0. That is, the exponent of e that produces 1 cannot be 0, since we’ve shown that e^0 = -1. Therefore, we have a contradiction, and so Euler’s identity must be false, if we assume \log(1) = 0.
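For reference, here is how the quantities in the derivation above evaluate under standard floating-point complex arithmetic, a sketch assuming Python’s cmath conventions (which use the principal branch of the logarithm), not a resolution of the argument. Note that on the principal branch, \log(e^x e^x) and \log(e^x) + \log(e^x) come apart, which is exactly the step at issue:

```python
import cmath

x = 1j * cmath.pi  # x = i*pi, as in the derivation

# Euler's identity as conventionally stated: e^{i*pi} + 1 = 0
print(abs(cmath.exp(x) + 1))  # ~0, up to floating-point error

# Principal-branch logarithm: log(-1) = i*pi, matching x = log(-1) above
print(cmath.log(-1))

# The principal branch is not additive over this product:
lhs = cmath.log(cmath.exp(x) * cmath.exp(x))             # ~0, since e^x * e^x ~ 1
rhs = cmath.log(cmath.exp(x)) + cmath.log(cmath.exp(x))  # ~2*pi*i
print(lhs, rhs)
```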

A New Foundation of Mathematics

I proved the result above about a week ago, but I let it sit on the back burner, because I don’t want to throw Euler, and possibly all complex numbers, under the bus, unless I have a solution. Now I have a solution, and it’s connected to a new theory of mathematics rooted in information theory and what I call “natural units”.

Specifically, given a set of N binary switches, the number of possible states is given by 2^N. That is, if we count all possible combinations of the set of switches, we find it is given by 2 raised to the power of the cardinality of the set. This creates a connection between the units of information and cardinality. Let’s assume base-2 logarithms going forward. Specifically, if S is a set, we assume the cardinality of S, written |S|, has units of cardinality or number, and \log(|S|) has units of bits. Though otherwise not relevant at the moment (i.e., there could be deeper connections), Shannon’s entropy equation also implies that the logarithm of a probability has units of bits. Numbers are generally treated as dimensionless, and so are probabilities, again implying that the logarithm always yields bits as its output.
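The switch-counting claim is easy to verify directly; a minimal sketch in Python (N = 5 is an arbitrary illustrative choice):

```python
import math
from itertools import product

N = 5  # number of binary switches (arbitrary illustrative choice)

# Enumerate every on/off combination across the N switches.
states = list(product([0, 1], repeat=N))

assert len(states) == 2 ** N        # |S| = 2^N possible states
assert math.log2(len(states)) == N  # log2(|S|) = N bits
```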

The question becomes then, what value should we assign to \log(1)? Physically, a system with one state cannot be used to meaningfully store information, since it cannot change states, and as such, the assumption that \log(1) = 0 has intuitive appeal. I’m not aware of any contradictions that follow from assuming that \log(1) = 0 (other than Euler’s identity), so I don’t think there’s anything wrong with it, though this of course doesn’t rule out some deeply hidden contradiction that follows.

However, I’ve discovered that assuming \log(1) = I_0 \neq 0 implies true results. Physically, the assertion that \log(1) = I_0 \neq 0 is stating that, despite not having the ability to store information, a system with one state still carries some non-zero quantity of information, in the sense that it exists. As we’ll see, I_0 cannot be a real number, and has really unusual properties that nonetheless imply correct conclusions of mathematics.

If we assume that \log(1) = I_0, it must be the case that 2^{I_0} = 1. We can make sense of this by assuming that 2^x is defined over \mathbb{R}, other than at x = 0, where it is simply undefined. This makes physically intuitive sense, since you cannot apply an operator a zero number of times, and expect a non-zero answer, at least physically. To do something zero times is to do literally nothing, and so the result must be whatever you started with, which is not exactly zero, but it cannot produce change. Now you could argue I’ve just made up a new number, but so what? That’s precisely the point, because it’s more physically intuitive than standard axioms, and as we’ll show, it implies true results. Further, interestingly, it implies the possibility that all of these numbers are physically real (i.e., negative and complex numbers), though they don’t have any clear expression in Euclidean 3-space (e.g., even credits and debits are arguably better represented as positive magnitudes that have two directions). That is, the assumption is that things that exist always carry information, which is not absurd, physically, and somehow, it implies true results of mathematics.

Again, I_0 = \log(1), and so I_0 = \log((-1)^2), which implies that \frac{I_0}{2} = \log(-1), and as such, 2^{\frac{I_0}{2}} = -1. If we consider \sqrt{2^{I_0}}, we will find two correct results, depending on how we evaluate the expression. If we evaluate what’s under the radical first, we have \sqrt{1} = 1. If, however, we evaluate \sqrt{2^{I_0}} = (2^{I_0})^{\frac{1}{2}}, we instead have 2^{\frac{I_0}{2}} = -1, which is also correct. I am not aware of any other number that behaves this way, producing two path-dependent but correct arithmetic results. Finally, because \frac{I_0}{2} = \log(-1), it follows that \frac{I_0}{4} = \log(i), and so 2^{\frac{I_0}{4}} = i, where i = \sqrt{-1}.
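These identities can be sanity-checked numerically if we model I_0 by a concrete complex value; a natural stand-in is the non-principal value 2\pi i / \ln 2 of \log_2(1). To be clear, this stand-in is my illustrative assumption for the check, not a claim about what I_0 actually is:

```python
import cmath, math

# Hypothetical stand-in for I_0: the non-principal branch value of log2(1),
# namely 2*pi*i / ln(2). Chosen purely to check the identities numerically.
I0 = 2j * math.pi / math.log(2)

assert abs(2 ** I0 - 1) < 1e-9         # 2^{I_0} = 1
assert abs(2 ** (I0 / 2) + 1) < 1e-9   # 2^{I_0/2} = -1
assert abs(2 ** (I0 / 4) - 1j) < 1e-9  # 2^{I_0/4} = i

# Path dependence of sqrt(2^{I_0}) under this model:
inner_first = cmath.sqrt(2 ** I0)  # evaluate under the radical first: sqrt(1) = 1
power_first = 2 ** (I0 / 2)        # evaluate as a power first: -1
assert abs(inner_first - 1) < 1e-9
assert abs(power_first + 1) < 1e-9
```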

As a general matter, given \log(N), we have \log(1 \cdot N) = \log(1) + \log(N) = I_0 + \log(N). Exponentiating, we find 2^{I_0 + \log(N)} = 2^{I_0} 2^{\log(N)} = N. This suggests that I_0 is an iterator that gives numbers physical units, in that 2^{I_0} is not dimensionless, though it is unitary.

This is clearly not a real number, and I’m frankly not sure what it is, but it implies true results. I am in no position to prove that it implies a consistent theory of arithmetic, so this is just the beginning of what I hope will be a complete and consistent theory of mathematics, insofar as that is possible, fully aware that the set of true statements about the integers is uncountable, whereas the set of proofs is countable.

Information, Fractional Cardinals, Negative Cardinals, and Complex Cardinals

In a paper titled Information, Knowledge, and Uncertainty, I presented a tautology that connects Information (I), Knowledge (K), and Uncertainty (U), as follows:

I = K + U.

The fundamental idea is that a system will have some quantity of information I that can be known about the system, and so everything I know about the system (K) plus what I don’t know about the system (U) must equal what can be known about the system. Specifically, we assume that I = \log(|S|), where S is the set of states of the system in question. This turns out to be empirically true, and you can read the paper to learn more. Specifically, I present two methods for rigorously calculating the values I, K and U, one is combinatorial, and the other is to use Shannon’s entropy equation for U. The results clearly demonstrate the equation works in practice, in addition to being philosophically unavoidable.
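A minimal numerical sketch of the entropy-based method (my own illustration, not code from the paper): when an observer’s uncertainty is uniform over the states not yet ruled out, Shannon’s formula for U reduces to the log of the remaining count, and I = K + U holds exactly:

```python
import math

def shannon_bits(probs):
    """Shannon entropy in bits: -sum(p * log2(p)), skipping zero-probability states."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

N = 8             # total states of the system (illustrative choice)
I = math.log2(N)  # what can be known about the system, in bits

remaining = 4                        # states the observer has not yet ruled out
probs = [1 / remaining] * remaining  # uniform over the remaining states
U = shannon_bits(probs)              # uncertainty, in bits
K = I - U                            # knowledge, in bits

assert abs(U - math.log2(remaining)) < 1e-12
assert abs((K + U) - I) < 1e-12  # the tautology I = K + U
```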

Because I has units of bits, K and U must also have units of bits. Therefore, we can exponentiate the equation using sets S, S_K, and S_U, producing the following:

|S| = |S_K| |S_U|, where \log(|S|) = K + U, \log(|S_K|) = K, and \log(|S_U|) = U.

Even if we restrict |S| to integer cardinalities, which makes perfect sense because it is the number of states the system in question can occupy, it is possible for either of S_K and S_U to have a rational-number cardinality. The argument is that exponentiating some number of bits produces a cardinality. Because both K and U have units of bits, regardless of their values, if we assume the relationship between information and number holds generally, it must be the case that there are cardinalities |S_K| and |S_U|. Because either could be a rational number, we must accept that rational cardinalities exist, given that the equation I = K + U is true, both empirically and philosophically. The same is true of negative cardinalities and complex cardinalities, given the arguments above regarding I_0, though there seems to be an important distinction, which is discussed below.

Inconsistency between Assumptions Regarding the Logarithm

It just dawned on me, after writing the article, that the discussion above presents what seem to be two independent and inconsistent axioms regarding the logarithm. Specifically, the exponentiated equation |S| = |S_K| |S_U| requires that \log(1) = 0. As an example, let’s assume we’re considering a set of N boxes, one of which contains a pebble, and we’re interested in the location of the pebble. As described, this system has N possible states (i.e., locations of the pebble) and therefore I = \log(N).

Now assume you’re told (with certainty) that the pebble is not in the first box. You are now considering a system with N-1 possible states, and so your uncertainty has been reduced. However, because this information doesn’t change the underlying system in any way, and in general, |S| cannot change as a result of our knowledge of the system, it must be the case that your Knowledge is given by K = \log(N) - \log(N-1), which is non-zero. We can then reasonably assume that S_U contains N - 1 states, and that |S_K| = 2^K. Now assume you’re told that all but one box has been eliminated as a possible location for the pebble. It follows that |S_U| = 1, and that U = \log(1). If \log(1) is not zero, I = K + U fails. Because it is a tautology, and empirically true, it must be the case that \log(1) = 0, which is plainly not consistent with the arguments above regarding I_0.
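The pebble-in-boxes example can be worked through numerically. The sketch below (with an arbitrary N = 10) also shows that |S_K| = 2^K = N/(N-1), a rational, non-integer cardinality of the kind discussed earlier:

```python
import math

N = 10            # boxes, one containing the pebble (illustrative choice)
I = math.log2(N)  # invariant information of the system

# After learning the pebble is not in the first box:
U = math.log2(N - 1)  # uncertainty over the N-1 remaining boxes
K = I - U             # knowledge gained: K = log2(N) - log2(N-1)

S_K = 2 ** K  # exponentiated knowledge: |S_K| = N/(N-1)
S_U = 2 ** U  # exponentiated uncertainty: |S_U| = N-1

assert abs(S_K - N / (N - 1)) < 1e-9  # a rational, non-integer "cardinality"
assert abs(S_K * S_U - N) < 1e-9      # |S| = |S_K| * |S_U|
```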

Now you could say I_0 is a bunch of garbage, and that’s why we have already found a contradiction, but I think that’s lazy. I think the better answer is that I = K + U governs representations of physical systems, as ideas, and not physical systems themselves. Because I_0 is rooted in a physically plausible theory of the logarithm, we can say that this other notion of the logarithm governs physical systems, but does not govern representations of physical systems, since that clearly leads to a contradiction.

The question is then, as a matter of pure mathematics, are these two systems independent? If so, then we have something like the Paris-Harrington theorem. At the risk of oversimplification, the idea is that the mathematics that governs our reality in Euclidean 3-space could be different from the Platonic mathematics that governs representations, or perhaps ideas generally.

I’ll note that I = K + U is a subjective measure of information related to a representation of a system, in that while I is an objective invariant of a system, K and U are amounts of information held by a single observer. In contrast, I_0 is rooted in the physically plausible argument that if a thing exists in Euclidean 3-space (i.e., it has some measurable quantity), then it must carry information, even if it is otherwise static in all other regards.

Interestingly, if we accept the path-dependent evaluation of \sqrt{2^{I_0}}, and we believe that I_0 is the result of a physically meaningful definition of the logarithm, then this could provide a mathematical basis for non-determinism, in that physical systems governed by I_0 (presumably everything, if we accept that all extant objects carry information) allow for more than one mechanical solution in at least some cases. And even if it’s not true non-determinism, if we view 1 as the minimum possible amount of energy, then I_0 is a very small amount of information, which would create the appearance of non-determinism from our scale of observation, in that the order of interactions would change the outcomes drastically, from -1 to 1.

In closing, I’ll add that in the exponentiated form, |S| = |S_K| |S_U|, neither set can ever be empty, otherwise we have an empty set S, which makes no sense, because again, the set can’t change given our knowledge about the set. Once we have eliminated all impossible states, S_U will contain exactly one element, and S_K will contain all other elements, which is fine. The problem is therefore when we begin with no knowledge, in which case S_U = S, in the sense that all states are possible, and so our uncertainty is maximized, and our knowledge should be zero. However, if S_K is empty, then we have no definition of \log(|S_K|).

We instead assume that S_K begins non-empty, ex ante, in that it contains the cardinality of S, which must be known to us. Once our knowledge is complete, S_K will contain all impossible states of S, which will be exactly N - 1 in number, in addition to the cardinality of S, which was known ex ante, leaving one state in S_U, preserving the tautology of both I = K + U and |S| = |S_K| |S_U|.

