Time as a measurement, not a dimension

I came to the conclusion last night that I may have wasted a lot of time thinking about time as an actual dimension of space. In my defense, I'm certainly not the first physicist or philosopher to do so. Notably, my paper, A Computational Model of Time-Dilation [1], describes time as a measurement of physical change, not a dimension, and it nonetheless produces the correct equations for time-dilation. For convenience, in a few places, I do treat time as a dimension, since a debate on the corporeal nature of time is not the subject of the paper; the point is instead that you can have objective time and still have time-dilation.

As a general matter, my view now is that reality is a three-dimensional canvas that is updated by the application of a rule, effectively creating a recursive function. See Section 1.4 of [1]. Because [1] is years old at this point, this is obviously not a "new" view, but one that I've returned to after spending a lot of time thinking about time as an independent dimension that could, e.g., store all possible states of the Universe. The quantum vacuum was one of the primary drivers of that view: specifically, the idea that other realities temporarily cross over into ours, and because that's presumably a random interaction, you should have, on average, a net-zero charge (i.e., equal representation from all charges), momentum, etc., creating an otherwise invisible background to reality, save for extremely close inspection.

I'm not aware of any experiment that warrants such an exotic assumption, and I'm not even convinced the quantum vacuum is real. As such, I think it is instead rational to reject the idea of a space of time, until there is an experiment that, e.g., literally looks into the future, as opposed to predicting the future using computation.

I'll concede the recursive function view of reality has some problems without time as a dimension, because the rule must be applied in parallel, everywhere in space; otherwise, e.g., one system would update its states while another wouldn't, creating a single reality with multiple independent timelines. This is not true at our scale, and I don't think there's any experiment that shows it's true at any scale. So if time doesn't really exist as a dimension, we still need some notion of synchronization, which is, in all fairness, typically rooted in time. But that doesn't imply time is some form of memory of the past, or some projection of the future.

This is plainly an incomplete note, but the point is to reject the exotic assumptions that are floating around in modern physics, in favor of something that is far simpler, yet works. Reality as a recursive function makes perfect sense, taking the present moment, transforming it everywhere, producing the next moment, which will then be the present, with no record of the past, other than by inference from the present moment.
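To make the recursive-function picture concrete, here is a toy sketch in Python (purely illustrative, and in no way the actual rule of physics): a three-dimensional array stands in for the canvas, and an arbitrary local rule maps the present state to the next state, with no history retained.

```python
import numpy as np

def step(state):
    """Apply one update rule to the entire 3D canvas at once.
    The rule (each cell relaxes toward the mean of its six
    neighbors) is arbitrary; the point is only that the next
    moment is a pure function of the present moment."""
    neighbors = sum(np.roll(state, shift, axis=axis)
                    for axis in range(3) for shift in (-1, 1))
    return 0.5 * state + 0.5 * (neighbors / 6.0)

# The "present moment" is a single 3D array; no record of the past is kept.
state = np.random.rand(16, 16, 16)
for _ in range(100):
    state = step(state)  # the prior state is simply discarded
```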

We're still left with the peculiar fact that all of mathematics seems immutable (e.g., the theorems of combinatorics govern reality in a manner that is more primary than physics, since they can never change or be wrong), but that doesn't imply time is a dimension. My view is instead that mathematics is beyond causation, and simply an aspect of the fabric of reality, whereas physics is a rule that is applied to the substance contained in reality, specifically, energy. Physics doesn't seem to change, but it could; in contrast, mathematics will never change, because that's just not possible.

Measuring Uncertainty in Ancestry

In my paper, A New Model of Computational Genomics [1], I presented an algorithm that can test whether one mtDNA genome is the common ancestor of two other mtDNA genomes. The basic theory underlying the algorithm is straightforward, and cannot be argued with:

Given genomes A, B, and C, if genome A is the ancestor of genomes B and C, then it must be the case that genomes A and B, and A and C, have more bases in common than genomes B and C. This is a relatively simple fact of mathematics that you can find in [1], specifically in Footnote 16. However, you can appreciate the intuition right away: imagine two people tossing coins simultaneously and writing down the outcomes. Whatever outcomes they have in common (e.g., both throwing heads) will be the result of chance. For the same reason, if you start with genome A and allow it to mutate over time, producing genomes B and C, whatever bases genomes B and C have in common will be the result of chance, and as such, they should both mutate away from genome A, rather than developing more bases in common with each other by chance. This will produce the inequalities |AB| > |BC| and |AC| > |BC|, where |AB| denotes the number of bases genomes A and B have in common.
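As a minimal sketch (in Python, with hypothetical function names, and assuming the genomes are already aligned strings of equal length), the base-level test is just:

```python
def common_bases(x, y):
    """Count the positions at which two aligned genomes share the same base."""
    return sum(1 for a, b in zip(x, y) if a == b)

def tests_as_common_ancestor(A, B, C):
    """True if |AB| > |BC| and |AC| > |BC|, i.e., if A is consistent
    with being the common ancestor of B and C."""
    AB, AC, BC = common_bases(A, B), common_bases(A, C), common_bases(B, C)
    return AB > BC and AC > BC
```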

For the same reason, if you count the number of matches between two populations at a fixed percentage of the genome, the match counts between populations A, B, and C should satisfy the same inequalities. For example, fix the matching threshold to 30% of the full genome, and then count the number of genome pairs between populations A and B that are at least a 30% match to each other. Do the same for A and C, and B and C. However, you'll have to normalize this to a [0,1] scale, otherwise your calculations will be skewed by population size. My software already does this, so there's nothing to do on that front.
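Here is a sketch of the population-level count in Python, building on common_bases above; dividing by the number of cross-population pairs is one simple way to normalize to a [0,1] scale, though the actual normalization in my software may differ in detail.

```python
def normalized_match_count(pop_x, pop_y, threshold, genome_length):
    """Fraction of cross-population genome pairs that match in at least
    `threshold` (e.g., 0.30) of the full genome. Dividing by the number
    of pairs removes the effect of population size."""
    min_matches = threshold * genome_length
    hits = sum(1 for x in pop_x for y in pop_y
               if common_bases(x, y) >= min_matches)
    return hits / (len(pop_x) * len(pop_y))
```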

By iteratively applying the population-level test for different values of the matching threshold M (expressed as a number of bases), we can also generate a measure of uncertainty associated with our observation. That is, not only can we test whether the inequalities are satisfied, we can also generate a measure of uncertainty associated with the test.

Specifically, fix M to some minimum value, which we select as 30% of the full genome size N, given that 25% is the expected matching percentage produced by chance, and 30% is meaningfully far from chance (again, see Footnote 16 of [1]). Further, note that as M increases, our confidence that the matches between A and B, and between A and C, are not the result of chance increases. For intuition, note that as we increase M, the set of matching genomes can only grow smaller. Similarly, our confidence that the non-matches between B and C are not the result of chance decreases as a function of M. For intuition, note that as we increase M, the set of non-matching genomes can only grow larger.

As a result, the minimum value of M for which the inequalities are satisfied informs our confidence in the B to C test, and the maximum value of M for which the inequalities are satisfied informs our confidence in the A to B and A to C tests. Specifically, the probability that the B to C test is the result of chance is informed by the difference M_{min} - 0.25N, whereas the A to B and A to C tests are informed by the difference N - M_{max}. Note that each difference is literally some number of bases, which is in turn associated with a probability (see again, Footnote 16 in [1]), and a measure of Uncertainty (see Section 3.1 of [1]). This allows us to first test whether or not a given population is the common ancestor of two other populations, and then further, assign a value of Uncertainty to that test.
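Putting the sweep together, here is a hedged sketch in Python; the step size, and the reuse of the normalized count above, are my own assumptions rather than a description of the released code.

```python
def inequality_holds(pop_A, pop_B, pop_C, M, N):
    """Test the population-level inequalities at a matching threshold of M bases."""
    AB = normalized_match_count(pop_A, pop_B, M / N, N)
    AC = normalized_match_count(pop_A, pop_C, M / N, N)
    BC = normalized_match_count(pop_B, pop_C, M / N, N)
    return AB > BC and AC > BC

def sweep_M(pop_A, pop_B, pop_C, N, step=50):
    """Sweep M from 30% of N up to N and return (M_min, M_max), the smallest
    and largest thresholds at which the inequalities hold, or None if they never do."""
    satisfied = [M for M in range(int(0.30 * N), N + 1, step)
                 if inequality_holds(pop_A, pop_B, pop_C, M, N)]
    return (min(satisfied), max(satisfied)) if satisfied else None

# M_min - 0.25 * N informs confidence in the B to C test;
# N - M_max informs confidence in the A to B and A to C tests.
```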

Phoenicians as Common Ancestor

In a previous article, I showed that the people of Cameroon test as the ancestors of Heidelbergensis, Neanderthals, and Denisovans, with respect to their mtDNA. The obvious question is, how is it that archaic humans are still alive today? The answer is that they're probably not truly archaic humans, but that their mtDNA is truly archaic. This is possible for the simple reason that mtDNA is remarkably stable, and can last for thousands of years without changing much at all. However, there's still the question of where modern humans come from, i.e., is there a group of people that tests as the common ancestor of modern human populations? The answer is yes, and it's the Phoenicians, in particular, a group of mtDNA genomes found in Puig des Molins. Astonishingly, the Phoenicians test as the common ancestor of the Pre-Roman Egyptians (perhaps not terribly astonishing), and of the modern day Thai and Sri Lankans, the latter two being simply incredible, and perhaps requiring a reconsideration of purported history.

The overall test is straightforward, and cannot be argued with: Given genomes A, B, and C, if genome A is the ancestor of genomes B and C, then it must be the case that genomes A and B, and A and C, have more bases in common than genomes B and C. This is a relatively simple fact of mathematics that you can find in my paper, A New Model of Computational Genomics [1], specifically in Footnote 16. However, you can appreciate the intuition right away: imagine two people tossing coins simultaneously and writing down the outcomes. Whatever outcomes they have in common (e.g., both throwing heads) will be the result of chance. For the same reason, if you start with genome A and allow it to mutate over time, producing genomes B and C, whatever bases genomes B and C have in common will be the result of chance, and as such, they should both mutate away from genome A, rather than developing more bases in common with each other by chance. This will produce the inequalities |AB| > |BC| and |AC| > |BC|, where |AB| denotes the number of bases genomes A and B have in common.

For the same reason, if you count the number of matches between two populations at a fixed percentage of the genome, the match counts between populations A, B, and C should satisfy the same inequalities. For example, fix the matching threshold to 30% of the full genome, and then count the number of genome pairs between populations A and B that are at least a 30% match to each other. Do the same for A and C, and B and C. However, you'll have to normalize this to a [0,1] scale, otherwise your calculations will be skewed by population size. My software already does this, so there's nothing to do on that front.

In this case, I’ve run several tests, all of which use the second population-level method described above. We begin by showing that the Phoenicians are the common ancestor of the modern day Sri Lankans and Sardinians. For this, set the minimum match count to 99.65% of the full genome size. This will produce a normalized score of 0.833 between the Phoenicians and Sri Lankans, and 0.800 between the Phoenicians and Sardinians. However, the score between the Sri Lankans and the Sardinians is 0.200, which plainly satisfies the inequality. This is consistent with the hypothesis that the Phoenician maternal line is the ancestor of both the modern day Sri Lankans and Sardinians. Setting the minimum match count to 88.01% of the genome, we find that the score between the Phoenicians and the Pre-Roman Egyptians is 0.500, and the score between the Phoenicians and the Sri Lankans is 1.000. The score between the Pre-Roman Egyptians and the Sri Lankans is instead 0.000, again satisfying the inequality. This is consistent with the hypothesis that the Phoenicians are the common ancestor of both the Pre-Roman Egyptians and the modern day Sri Lankans.

This seems peculiar, since the Phoenicians were a Middle Eastern people, and the genomes in question are from Ibiza. However, the Phoenicians in particular were certainly a sea-faring people, and moreover, civilization in the Middle East goes back to at least Ugarit, which could date as far back as 6,000 BC. Though not consistent with purported history, this at least leaves open the possibility that people from the Middle East traveled to South Asia. This might sound too ambitious for the time, but the Phoenicians made it to Ibiza from the Middle East, which is roughly the same distance as the Middle East to Sri Lanka, and both Ibiza and Sri Lanka are islands. Once you're in South Asia, the rest of the region becomes accessible.

If this is true, then it shouldn't be limited to Sri Lanka, and this is in fact the case. In particular, the Thai also test as the descendants of the Phoenicians, using the same analysis. Even more interesting, the modern day Norwegians, Swedes, and Finns all test as the descendants of the Thai, again using the same analysis. Putting it all together, it seems plausible that early Middle Eastern civilizations not only visited but settled South Asia, and that some of them came back, in particular to Egypt and Scandinavia. This could explain why the Pre-Roman Egyptians are visibly Asian people, and further, why Thai-style architecture exists in early Scandinavia. Though the latter might sound totally implausible, it is important to note that some Thai and Norwegian people are nearly identical on the maternal line, with about 99.6% of the genome matching. Something has to explain that. Also note that the Sri Lankan maternal line was present throughout Europe around 33,000 BC. This suggests plainly that many Europeans, and the Classical World itself, descend from the Phoenicians. That somewhat remote populations also descend from them is not too surprising, in this context.

Further, there are striking similarities between the Nordic and Canaanite religions and alphabets, in particular, the gods El / Adon and Odin, and their sons, Baal and Baldur, respectively. Once you place greater emphasis on genetic history over written history, this story sounds perfectly believable. Further still, if people migrated back from South Asia to the West, then this should again not be limited to Scandinavia, and this is in fact the case. Astonishingly, the Pre-Roman Egyptians test as the descendants of the Thai people, using the same analysis. Obviously the Pre-Roman Egyptians were not the first Africans, and in fact, everything suggests they're South Asian; for the same reason, none of this implies that modern day Scandinavians are the first Scandinavians, and instead, it looks like many Norwegians and Finns are also South Asian.

Finally, this is all consistent with the fact that the most advanced civilizations of the ancient world, i.e., those of the Classical World, were all proximate to the Middle East, suggesting that the genesis of true human intelligence could have come from somewhere near Phoenicia.

On the improbability of reproductive selection for drastic evolution

Large leaps in evolution seem to require too much time to make sense. Consider the fact that about 500 bases separate human mtDNA from that of a gorilla or a chimp. That's a small percentage of the approximately 16,000 bases that make up human mtDNA, but the number of sequences that are 500 bases in length is 4^{500}, which has approximately 300 digits. As a consequence, claiming that reproductive selection, i.e., the birth of some large number of children that were then selected for fitness by their environment, is the driver of the change from ape to man makes no sense, as there's simply not enough time or offspring for that to be a credible theory, even for this small piece of its machinery, which is the evolution of mtDNA.
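The magnitude claim is easy to verify directly (a one-line check in Python):

```python
# 4^500 has 302 decimal digits, i.e., roughly 300, dwarfing any plausible
# number of offspring over the relevant evolutionary timescale.
print(len(str(4 ** 500)))  # -> 302
```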

However, if we allow for evolution at the cellular level in the individual, over the lifetime of the individual, then it could explain how, e.g., 500 extra bases end up added to the mtDNA of a gorilla, since there are trillions of cells in humans. That is, floating bases are constantly added as insertions, in error, and when lethal, the cell in question dies off. However, if an insertion is not lethal, and instead beneficial, it could occur throughout the body of the organism, causing the organism to evolve within its own lifetime, by, e.g., changing its mtDNA through such a large-scale, presumably beneficial insertion, like the one that divides apes from humanity.

This implies four corollaries:

1. It is far more likely that any such benefits will be passed on through the paternal line, since men constantly produce new sperm. In contrast, women produce some fixed number of eggs by a particular age. As a result, men present more opportunities to pass down mutations of this type, if those mutations also impact their sperm.

2. There must be some women who are capable of producing “new eggs” after a mutation, otherwise the mutation that caused gorilla mtDNA to evolve into human mtDNA, wouldn’t persist.

3. If you argue instead that such drastic mutations occur in the sperm or the eggs, then you again have the problem of requiring too much time, since it would require a large number of offspring that are then selected for lethal and non-lethal traits. This is the same argument we dismissed above. That is, the number of possible 500-base insertions is too large for this to be a credible theory. As a consequence, drastic mutations cannot be the result of reproductive selection, period, and require another explanation, for which cellular mutations within the individual seem a credible candidate.

4. If true, then it implies the astonishing possibility of evolution within the lifetime of an individual. This sounds far-fetched, but cancer is a reality, and is a failure at the cellular level that causes unchecked growth. The argument above implies something similar, but beneficial, that occurs during the lifetime of an individual, permeating its body, and thereby impacting its offspring.

Denisovan as Common Ancestor, Revisited

In a previous note, I showed that the Denisovans appear to be the common ancestor of both Heidelbergensis and Neanderthals, in turn implying that they are the first humans. Since writing that note, I've expanded the dataset significantly, and it now includes the people of Cameroon. I noticed a while back that the people of Cameroon are plainly of Denisovan ancestry. Because it's commonly accepted that humanity originated in Africa, the people of Cameroon are therefore a decent candidate for being related to the first humans.

It turns out that when you test Cameroon mtDNA, it seems they're not only related to the first humans, but are in fact the first humans, testing as the ancestors of the Denisovans, Heidelbergensis, and the Neanderthals. You might ask how it's possible that archaic humans survived this long. The answer is that mtDNA is remarkably stable, and so while the people of Cameroon are almost certainly not a perfect match to the first humans, it seems their mtDNA could be really close, since they predate all the major categories of archaic humans with respect to their mtDNA.

The overall test is straightforward, and cannot be argued with: Given genomes A, B, and C, if genome A is the ancestor of genomes B and C, then it must be the case that genomes A and B, and A and C, have more bases in common than genomes B and C. This is a relatively simple fact of mathematics that you can find in my paper, A New Model of Computational Genomics [1], specifically in Footnote 16. However, you can appreciate the intuition right away: imagine two people tossing coins simultaneously and writing down the outcomes. Whatever outcomes they have in common (e.g., both throwing heads) will be the result of chance. For the same reason, if you start with genome A and allow it to mutate over time, producing genomes B and C, whatever bases genomes B and C have in common will be the result of chance, and as such, they should both mutate away from genome A, rather than developing more bases in common with each other by chance. This will produce the inequalities |AB| > |BC| and |AC| > |BC|, where |AB| denotes the number of bases genomes A and B have in common.

For the same reason, if you count the number of matches between two populations at a fixed percentage of the genome, the match counts between populations A, B, and C should satisfy the same inequalities. For example, fix the matching threshold to 30% of the full genome, and then count the number of genome pairs between populations A and B that are at least a 30% match to each other. Do the same for A and C, and B and C. However, you'll have to normalize this to a [0,1] scale, otherwise your calculations will be skewed by population size. My software already does this, so there's nothing to do on that front.

If it is the case that populations B and C evolved from population A, then the number of matches between A and B and A and C, should exceed the number of matches between B and C. The mathematics is not as obvious in this case, since you’re counting matching genomes, rather than matching bases, but the intuition is the same. Just imagine beginning with population A, and replicating it in populations B and C. In this initial state, the number of matching genomes between A and B, A and C, and B and C, are equal, since they’ve yet to mutate away from A (i.e., they are all literally the same population). As populations B and C mutate, the number of matching genomes between B and C should only go down as a function of time, since the contrary would require an increase in the number of matching bases between the various genomes, which is not possible at any appreciable scale. Again, see [1] for details.

In the first note linked above, I show that the Denisovans are arguably the common ancestors of both Heidelbergensis and the Neanderthals. However, if you use the same code to test the people of Cameroon, you'll find that they test as the common ancestor of the Denisovans, Heidelbergensis, and the Neanderthals. This is just not true of other populations that are related to Denisovans. For example, I tested the Kenyans, the Finns, and the Mongolians, all of which have living Denisovans in their populations (at least with respect to their mtDNA), and they all fail the inequalities. Now, there could be some other group of people that is even more archaic than the people of Cameroon, but the bottom line is, this result is perfectly consistent with the notion that humans originated in Africa, migrated to Asia, and then came back to both Europe and Africa, since, e.g., about 10% of Kenyans are a 99% match to South Koreans and Hawaiians, the Pre-Roman Ancient Egyptians were visibly Asian people, and about 40% of South Koreans are a 99% match to the Pre-Roman Ancient Egyptians.

The updated dataset, which includes the Cameroon genomes and others, is available here. You'll have to update the command line code in [1] to include the additional ethnicities, but it's a simple copy / paste exercise, which you'll have to do anyway to change the directories to match where you save the data on your machine.

Plastic as Optical Storage

Earlier today the sunlight hit the top of my stapler, and I noticed the reflection on the wall did not match the surface of the stapler at all. Once I finished work, I took a look, and found that there was no obvious explanation for the pattern it was producing on the wall (photos below). I then tested it using my flashlight, which produced the same result. I did the same thing to the bottom of the stapler, which again produced an inexplicable pattern (photos also below). I took the stapler apart, and found that what I was seeing was the internal plastic structure of the stapler.

The obvious answer is that it’s reflecting more at the points where there’s more plastic, just like a wall that’s harder at certain points will cause a ball to bounce back harder at those points. That is, light is kinetic energy, and when it hits a surface, the amount of material it passes through will, among other things, impact the percentage of light that reflects back, rather than being absorbed or passing through it altogether (producing in that case transparency).

The stapler is truly opaque, and so, e.g., placing a finger below the plastic does not change the reflected pattern. Instead, it must be that the marginal difference in luminosity in the reflection really is due to the amount of plastic at a given point on the stapler, with the densest points producing more reflection, and therefore greater projected luminosity. As noted, the bottom of the stapler produced another otherwise inexplicable pattern, which was plainly the result of the amount of plastic below a given point.

This can obviously be used as optical storage that's read with an ordinary light. The advantage is that reading with a light does not require moving the object (cf. a compact disc), and therefore reading should be basically non-destructive, even over extremely long periods of time. Moreover, plastic is really cheap, and this is not the nicest stapler, so I doubt the substance in question is worth very much. Storing an appreciable amount of information will require either a lot of plastic, or production at a small feature scale, neither of which should be a problem or expensive. And again, this could create basically permanent storage if it's otherwise cared for and simply read. This is not the new USB drive, but it could be a new means of storing data permanently (e.g., government records).

Two Notes on Economics

Inflation

It dawned on me the other day that you might be able to completely eliminate inflation, given adequate supply. Just imagine an auction, where bidders submit bids to an order book, and the order book is processed by the seller in a manner that maximizes the seller’s revenues. This is probably what’s going to happen with any competent seller, since they will generally seek to maximize revenues.

Now imagine that everyone is aware of the seller's reservation price, the supply being sold, and the demand. If we assume that supply is adequate to satisfy demand, there is no incentive on the part of any bidder to bid above the reservation price. Now assume that we give every bidder more money, effectively increasing the money supply of the auction. It doesn't matter: there's still no incentive to compete above the reservation price, because everyone knows that there's adequate supply.
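A toy order-book simulation makes the incentive clear (a sketch in Python; the uniform-price clearing rule and all names are my own assumptions, not a description of any real exchange):

```python
def clearing_price(bids, supply, reservation_price):
    """Toy uniform-price auction: fill the highest bids first; the clearing
    price is the lowest accepted bid, never below the reservation price."""
    accepted = sorted((b for b in bids if b >= reservation_price),
                      reverse=True)[:supply]
    return accepted[-1] if accepted else None

# Transparent, adequate supply: no bidder has any reason to bid above 100.
bids = [100] * 50   # 50 bidders, reservation price known to be 100
print(clearing_price(bids, supply=60, reservation_price=100))  # -> 100
# Give every bidder more money and nothing changes: bids stay at 100,
# so the clearing price does not inflate.
```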

This suggests that economies could benefit radically through regulation that requires transparency with respect to reservation prices, supply, and demand, and moreover, imposes an order book concept, like you find in financial market exchanges. This is not hard to implement, and it’s at least worth experimenting with, to see if e.g., sector-based inflation can be controlled.

Yield on Consumption

Preferences have long been considered by economists, and I don't know who considered the question first. However, I do know that prior to Von Neumann, only ordinal preferences were well understood. An example of an ordinal preference is that you prefer apples to bananas. However, this is not a quantitative relationship, and instead simply places apples above bananas in an ordinal ranking.

Genius that he was, from what I understand, Von Neumann flippantly solved the problem of cardinal preferences using lotteries. Specifically, he attached probabilities to outcomes, and then asked for the indifference point. So as an example, you could be indifferent between getting an apple with a probability of 0.2 (and getting nothing at all with a probability of 0.8), and getting a banana with a probability of 0.8 (and nothing with a probability of 0.2). If that's true, then you view both lotteries (apple lottery and banana lottery) as equivalent, suggesting that you're willing to take the significant chance that you get nothing in the case of the apple, because you like apples so much. You could argue that you like apples 4 times more than bananas. As a result, Von Neumann's use of lotteries allows us to at least think objectively about cardinal preferences.
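In symbols, reading the indifference point through expected utility (the standard interpretation, stated here only to make the arithmetic explicit): 0.2 \cdot U(\text{apple}) = 0.8 \cdot U(\text{banana}), which implies U(\text{apple}) = 4 \cdot U(\text{banana}).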

However, just because people assign such great value to, e.g., apples versus bananas, doesn't mean that they've solved a problem in economics. For example, just ask an alcoholic whether they'd rather have a beer or dinner, and you'd be shocked to find that at least some would go for the beer. In more precise terms, even if we can measure cardinal preferences, it doesn't imply that human beings have tuned them to any reasonable metric. The question is then, can we measure the "correctness", for lack of a better word, of the consumption behaviors of individuals? I believe the answer is yes.

Specifically, consumption will impact GDP. As a consequence, we could, in a laboratory environment, substitute one consumer good with another throughout an entire economy, and measure the impact on GDP over time, versus a control economy that did not make use of the substitution. Though this is difficult to do in practice, we can nonetheless construct obvious thought experiments. For example, imagine everyone in the United States decides to eat $150 worth of candy every month, forgoing internet service. This will with near certainty have a negative impact on GDP, and eventually, a negative impact on health, which will again have a negative impact on GDP. This simple experiment implies that the yield on consuming internet services is significantly higher than the yield on consuming candy.

This is deliberately a bit comical, but it's an extremely serious idea, one that would allow for objective preferences that are simply correct, in the context of the returns they generate. In this view, societies that have objectively better preferences will have higher GDPs per capita.

Assertion and Exclusion

It dawned on me last night that it's difficult to do worse than chance when predicting a random variable. Specifically, if the lowest probability event in a distribution has probability, e.g., \frac{1}{3}, it's not clear how you can achieve an accuracy that is lower than \frac{1}{3}, but I'm not an expert on probability, so there could be a known result on point. I noticed this because one of my algorithms routinely performs worse than chance, and I never cared, because it uses confidence filtering to produce accuracy that is on par with other machine learning algorithms. See Information, Knowledge, and Uncertainty [1], generally. But it's actually astonishing upon reflection. For example, try to underperform 50% accuracy using a fair coin. I don't think it's possible over a long run, though again there could be a result on point. In fact, if you can underperform chance, it suggests that you have information about the system. For example, if I know the next outcome will be heads, I can deliberately select tails. Doing this repeatedly will drive my accuracy to zero.

The net point being, that if your accuracy is under 50%, you actually know that your answer is probably wrong, which is positive information regarding whatever system you're attempting to predict. Specifically, if your accuracy is less than 50%, then it's more likely that the correct answer is in the complement of your prediction (i.e., the set that contains every outcome other than your prediction). This sounds trivial, but it's not, because it's a different kind of information. When accuracy is above 50%, the most likely answer is a set with one element, asserting a possibility. When accuracy is below 50%, the most likely answer is the complement of a set, excluding a possibility. In the extreme case, when your accuracy is significantly below chance, this allows you to exclude a possibility with high confidence, which is often useful. That said, it's not clear how this could happen, but again, the algorithm I describe in [1] does exactly that, routinely.
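A quick simulation illustrates both halves of the point (a toy sketch in Python, not the algorithm from [1]): blind guessing against a fair coin cannot drift far below 50% over a long run, whereas deliberately inverting an informed prediction drives accuracy to zero.

```python
import random

flips = [random.choice("HT") for _ in range(100_000)]

# Uninformed guessing: accuracy hovers around 50%; you can't reliably do worse.
blind = sum(random.choice("HT") == f for f in flips) / len(flips)

# Informed inversion: knowing each outcome and predicting its complement
# yields 0% accuracy, i.e., below-chance accuracy requires information.
inverted = sum(("T" if f == "H" else "H") == f for f in flips) / len(flips)

print(round(blind, 3), inverted)  # roughly 0.5, and exactly 0.0
```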

One possibility is that there really is a fundamental principle that accuracy increases as a function of the information upon which a prediction is based, and if you can reduce or increase that information, you can achieve arbitrarily low or high accuracy, regardless of what you're trying to predict. This is consistent with another astonishing observation I made, that you can actually predict random variables with high accuracy using the techniques I outline in [1], which should otherwise be impossible. This does not contradict the laws of probability at all, since it's only a subset of the output of a random source that can be predicted, but it's nonetheless a counterintuitive, and potentially profound, result.

Practical Infinity and Prediction

I’ve thought a lot about the physical meaning of infinity, but it just dawned on me that a simple thought experiment suggests that infinity could be physically meaningful, in the simple sense of unbounded quantity: just consider the unbounded expansion of time. There’s no reason to assume that the Universe will ever vanish, even if all the mass and energy within the Universe does vanish. This posits a vacuum that exists, and persists through time, implying time is infinite, even if there’s nothing happening in that vacuum (i.e., it’s just an empty space that nonetheless exists).

Let's dispense with that notion, and consider instead the mass and energy in the Universe itself. Fields can effectively destroy energy: just imagine light escaping from a massive object; it could redshift into literally nothing, if energy is quantized. However, fields cannot (to my knowledge) destroy mass. As a consequence, if all mass and energy were to vanish, it would have to be the case that all mass is converted to energy, which is then destroyed via gravity. However, you can't have gravity without mass, which implies that it is literally impossible for all of the energy in the Universe to be destroyed, since even in this view, you'd always have at least some mass. Therefore, time must be infinite, absent other assumptions.

If time is infinite, then it suggests the more general possibility of unbounded quantities, in particular, arbitrarily precise measurements. You don’t need the set of real numbers to have problems with predictions, you just need arbitrarily precise measurements, which would always imply non-zero error (i.e., the unexplored portion could always contain information). If the consequences of the error are meaningful, your predictions will be meaningfully wrong. Because humans have finite lives, if arbitrary precision is real, then it necessarily implies all predictions carry uncertainty.

Modeling Credit Using ML

My AutoML software, Black Tree AutoML, can already predict credit outcomes with no specialization at all. But it just dawned on me that with a bit of work, you can use any clustering and classification system to model credit in a meaningful way. First let's define the relevant properties of a credit, which are its assets and its liabilities, and for simplicity, we'll include the equity capital of the credit in its liabilities. This will allow us to express a credit, at a given moment in time t, as a vector h(t) = (a_1, \ldots, a_k; l_1, \ldots, l_m), where each a_i is the value of assets of type i owned by the credit, and each l_i is the value of liabilities of type i owed by the credit. Because this is so abstract, it allows you to consider not only corporates, but SPVs and individuals as well.

Now let's posit a dataset of credits S = \{h_1, \ldots, h_M\} that were sampled over time. That is, each h_i is actually a time-series of a given credit, and we can evaluate h_i(t) for any t within some ordinal interval, though you could also consider specific periods of time as well. The overall gist being, we have observed and recorded the state of a given credit over time, in the form of the vector h(t) = (a_1, \ldots, a_k; l_1, \ldots, l_m). We can therefore pull all credits that are sufficiently similar to some new input credit h(t), which will produce a cluster of similar credits. Because our dataset contains time-series data for each of the credits returned in the cluster, we can form possible future paths for h(t). This will allow us to say, as a general matter, what the future of h will look like, given its present state h(t). Moreover, we can easily construct a probability of default, again using the cluster, since all of the credits in the cluster either paid or didn't, though you could have some unknowns as well as a practical matter (i.e., those credits are still outstanding).

Applying this process repeatedly to the initial state of some credit h(t_0), we will construct a set of possible future paths for that credit, given its initial state. Specifically, first we find the cluster associated with h(t_0). Then, we find the next state of each credit in that cluster. So, e.g., if credit x(t_j) is in the cluster associated with h(t_0), we find its next state in the dataset, which we can represent as x(t_{j+1}). We do this again for all such x, and continue as desired, which will produce a dataset of possible future paths for the credit that grows exponentially as a function of time, and at each ordinal interval of time, there will be some probability of default based upon the dataset.
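Here is a minimal sketch of the loop in Python (the Euclidean distance, the radius threshold, and the data structures are stand-ins of my own choosing, not the clustering performed by Black Tree AutoML):

```python
import numpy as np

def cluster(states, query, radius):
    """Indices of credits whose state vector lies within `radius` of the query
    state h(t); a stand-in for the clustering step."""
    q = np.asarray(query)
    return [i for i, v in enumerate(states)
            if np.linalg.norm(np.asarray(v) - q) <= radius]

def default_probability(defaulted, cluster_ids):
    """Empirical default rate among resolved credits in the cluster
    (None marks credits that are still outstanding)."""
    resolved = [defaulted[i] for i in cluster_ids if defaulted[i] is not None]
    return sum(resolved) / len(resolved) if resolved else None

def next_states(time_series, cluster_ids, t):
    """One expansion step: the observed next state of every clustered credit,
    which becomes the set of possible paths for the query credit at t + 1."""
    return [time_series[i][t + 1] for i in cluster_ids
            if t + 1 < len(time_series[i])]
```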

My AutoML software is typically really accurate, so I would wager that if you use my software for the clustering step, you’re going to get great answers, and probably make a lot of money as a consequence, and so it’s another great reason to buy my software, which is comically better than everyone else’s.