On Perfect Knowledge

My paper Information, Knowledge, and Uncertainty [1] implies the superficially awkward conclusion that a perfectly consistent set of observations carries no Knowledge at all. This follows from the fundamental equation in [1], which defines Knowledge as the balance of Information less Uncertainty. Symbolically,

I = K + U,

which in turn implies that K = I - U. In the case of a single observation of a given system, Information is assumed to be the maximum entropy of the system over its possible states, and so a system with N possible states has an Information of \log(N). Uncertainty is instead given by the entropy of the distribution over those states, which can of course be less than the maximum entropy I = \log(N). If it turns out that U < I, then K > 0. All of this makes intuitive sense: a low-entropy distribution, for example, carries very little Uncertainty, since it must have at least one high-probability event, making the system at least somewhat predictable.
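
To make the arithmetic concrete, here is a minimal Python sketch of these measures. The function names are mine, chosen for illustration, and I use base-2 logarithms (bits) as an assumption; [1] may use a different base, but the relationships are the same.

```python
import math

def information(n_states: int) -> float:
    """Information I: the maximum entropy of a system with n_states states, log2(N) bits."""
    return math.log2(n_states)

def uncertainty(probs: list[float]) -> float:
    """Uncertainty U: the Shannon entropy of the distribution over states, in bits."""
    return sum(-p * math.log2(p) for p in probs if p > 0)

def knowledge(probs: list[float]) -> float:
    """Knowledge K = I - U."""
    return information(len(probs)) - uncertainty(probs)

# A skewed two-state system: U < I, so K > 0 and the system is somewhat predictable.
probs = [0.9, 0.1]
print(information(len(probs)))  # I = 1.0 bit
print(uncertainty(probs))       # U ~= 0.469 bits
print(knowledge(probs))         # K ~= 0.531 bits
```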

The strange case is a system with a single, truly certain state. The entropy of its distribution is zero, and so is its maximum entropy, since \log(1) = 0. This sets all three measures to zero: zero Information, zero Knowledge, and zero Uncertainty. However, this makes sense if you accept Shannon's measure of entropy, since a source with a single certain event requires zero bits to encode. Similarly, such a source carries no Uncertainty, for exactly the same reason. You could argue that this is a special case where the equation above breaks down, but that would be wrong. The system still has to exist in the first instance; it is simply stuck in a constant state. Such systems are physically real, albeit usually temporary, e.g., a broken clock. Likewise, a source that generates only one signal still has to exist in the first instance. As such, you have zero Uncertainty with respect to something that actually exists, which is very different from having zero Uncertainty with respect to nothing at all, a condition that is not notable in any meaningful or practical way. The conclusion is that zero Knowledge coupled with zero Uncertainty, with respect to a real system, is physically meaningful: it means you know the system's state with absolute certainty. You have the maximum possible Knowledge; it just happens that this maximum is zero bits for a static system.
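
Here is the degenerate case itself as a self-contained snippet, again in bits by assumption: a single-state system where Information, Uncertainty, and Knowledge all come out to exactly zero.

```python
import math

def entropy(probs: list[float]) -> float:
    """Shannon entropy in bits; a certain event contributes 0 bits."""
    return sum(-p * math.log2(p) for p in probs if p > 0)

# A single-state system, e.g., a broken clock: N = 1.
probs = [1.0]
I = math.log2(len(probs))  # maximum entropy: log2(1) = 0 bits
U = entropy(probs)         # entropy of the certain distribution: 0 bits
K = I - U                  # Knowledge: the maximum possible, which is 0 bits
print(I, U, K)             # 0.0 0.0 0.0
```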

At the risk of being overly philosophical, consider the set of all mathematical theorems, which must be infinite in number for the simple reason that trivial deductions are themselves theorems. This is a fixed, immutable set, and as a consequence, perfect Knowledge of it would have a measure of zero bits. To make this more intuitive, consider the set of all mathematical statements, and assign each a truth value of either true or false. If you do not know the truth value of every statement, then you are considering what is, from your perspective, a dynamic system, which could change as information becomes available (e.g., you prove a statement false). If instead you do know the truth value of every statement, then it is a fixed system with zero Uncertainty, and therefore zero Knowledge.

