I’ve noted before that proofs start to fail when you have an infinite number of summands, e.g., formally summing over all Fibonacci numbers produces -1. This is obviously wrong, and implies that ordinary algebra fails when given an infinite number of terms. It also suggests that there is a logical independence between limits and the infinite case proper.
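For concreteness, the manipulation I have in mind is the standard formal one (sketched here under the usual convention $F_1 = F_2 = 1$): let $S = \sum_{n=1}^{\infty} F_n$, so that

$$\sum_{n=1}^{\infty} F_{n+2} = S - F_1 - F_2 = S - 2.$$

Applying the recurrence $F_{n+2} = F_{n+1} + F_n$ term by term, as if the algebra of finite sums carried over,

$$S - 2 = \sum_{n=1}^{\infty} \left( F_{n+1} + F_n \right) = (S - F_1) + S = 2S - 1,$$

which forces $S = -1$, an absurd value for a sum of positive terms.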
That is, e.g., the fact that $\lim_{n \to \infty} \sum_{k=1}^{n} \frac{1}{2^k} = 1$ does not imply that the number 1 is in fact the sum over the infinite set of such terms. You can easily construct a proof by induction that you can always find a value of $n$ that will cause the finite sum to be arbitrarily close to 1. As a consequence, the sum at infinity cannot be less than 1. An intuitive proof would simply note that all terms present in each finite sum must be present in the infinite sum, and since the finite sums get arbitrarily close to 1, the infinite sum cannot be less than 1. More formally, assume to the contrary that the sum in the infinite case is some $s < 1$. Since there is always a value of $n$ that will cause the finite sum to be arbitrarily close to 1, we can always find a value of $n$ that will cause the finite sum to exceed $s$, contradicting the assumption, since the infinite sum contains every term of that finite sum. Therefore, the sum is not less than 1 in the infinite case. The sum can, however, be equal to 1 without contradicting such a proof by induction (though it could contradict other fundamental assumptions beyond the scope of this discussion), since that proof requires only that all finite sums are less than, yet arbitrarily close to, 1. Interestingly, there is a similar logical independence in the case that the sum exceeds 1, since that again does not contradict the proof by induction that the sum gets arbitrarily close to 1 in all finite cases.
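For the geometric series used as the running example above, the required value of $n$ can be exhibited explicitly, using the standard closed form for the finite sum:

$$\sum_{k=1}^{n} \frac{1}{2^k} = 1 - \frac{1}{2^n},$$

so given any supposed infinite-case value $s < 1$, every $n > \log_2 \frac{1}{1-s}$ yields $1 - \frac{1}{2^n} > s$.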
I don’t think this is academic; instead, I think it’s a potentially deep point about algebra in the infinite case, and about non-Turing computing, because the sum in the infinite case is not computable: every computable function makes use of only a finite number of operations. Specifically, if infinite systems really exist in Nature, then there could be a correct answer to whether the sum is actually 1, or greater than 1. This reminds us that all of mathematics is ultimately rooted in reality itself, and if our assumptions are wrong, then our theorems will be physically meaningless. For combinatorics (e.g., graph theory, counting problems), it’s simply not credible to doubt the assumptions, since they’re plainly physically true. But when you get into this kind of mathematics, it’s not obvious what the right answer is.
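As a minimal illustration of the computability point (a sketch, not a proof, using the running example series again): every call below halts after finitely many operations and returns a value strictly below 1, with the gap to 1 exactly $\frac{1}{2^n}$; no finite computation ever produces the infinite case itself.

```python
from fractions import Fraction

def partial_sum(n: int) -> Fraction:
    """Exact value of 1/2 + 1/4 + ... + 1/2^n (the running example series)."""
    return sum(Fraction(1, 2**k) for k in range(1, n + 1))

# Each call halts after finitely many operations, and each result is
# strictly less than 1: the gap to 1 is exactly 1/2^n, never zero.
for n in (1, 5, 10, 50):
    s = partial_sum(n)
    print(f"n={n}: sum={s}, gap to 1 = {1 - s}")
```

No matter how large $n$ is, the computation remains a finite object; whether the value at infinity is exactly 1 is, on the view above, a further assumption rather than something any computation can settle.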