I noted in the past that a UTM plus a clock, or any other source of ostensibly random information, is ultimately equivalent to a UTM alone, for the simple reason that any input can eventually be produced by iterating through all possible inputs to the machine in numerical order. As a consequence, any random input fed to a UTM will eventually be generated by a second UTM that simply enumerates all possible inputs in some fixed order.
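As a minimal sketch of the enumeration argument (the choice of binary strings and the function name are just illustrative assumptions, not anything from the original argument), a generator like the following eventually yields any particular finite input, random-looking or not:

```python
from itertools import count, product

def enumerate_inputs():
    """Yield every finite binary string in shortlex order: by length, then numerically."""
    for length in count(0):
        for bits in product("01", repeat=length):
            yield "".join(bits)

# Any particular finite input appears after finitely many steps.
target = "101101"
for index, candidate in enumerate(enumerate_inputs()):
    if candidate == target:
        print(f"found {target!r} at step {index}")
        break
```

The catch, of course, is the number of steps: the index at which a given input appears grows exponentially with its length, which is exactly the practical problem discussed next.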
However, random inputs quickly reach a size where the time required to generate them by exhaustive enumeration exceeds any practical limit. This is a familiar problem in computer science generally, where the time needed to solve a problem exhaustively can exceed the time elapsed since the Big Bang. As a practical matter, then, a UTM plus a set of inputs, whether random or specialized to a particular problem, can be superior to a UTM alone, since it can actually solve a given problem in a sensible amount of time, whereas a UTM without such specialized input cannot.

This suggests a practical hierarchy that subdivides finite time by objective scales, e.g., the age of empires (about 1,000 years), the age of life (a few billion years), and the age of the Universe itself (i.e., the time since the Big Bang). This distinction is real, because it helps you think about what kinds of processes could plausibly be at work solving a given problem, and it plainly has implications in genetics, because there you're dealing with molecules so large that even random sources don't really make sense as an explanation, suggesting yet another means of computation is at work.
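To make those scales concrete, here's a back-of-envelope sketch; the rate of 10**18 candidates per second is an assumed, deliberately generous figure, not something from the argument above:

```python
import math

RATE = 10**18  # assumed candidates checked per second (a generous figure)
SECONDS_PER_YEAR = 3.15e7

timescales = {
    "age of empires (~1,000 years)": 1e3 * SECONDS_PER_YEAR,
    "age of life (~4 billion years)": 4e9 * SECONDS_PER_YEAR,
    "age of the Universe (~13.8 billion years)": 1.38e10 * SECONDS_PER_YEAR,
}

for name, seconds in timescales.items():
    candidates = RATE * seconds
    max_bits = math.floor(math.log2(candidates))
    print(f"{name}: ~2**{max_bits} candidates, i.e. inputs up to ~{max_bits} bits")
```

Under these assumptions, exhaustive enumeration tops out at inputs of roughly 100 bits even on cosmic timescales, which is tiny compared to, say, the description of a large biological molecule.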