Follow up on Roman mtDNA

I noted previously that literally no one was a perfect match for Ancient Roman mtDNA, other than other Ancient Romans. The dataset I used at the time had about 400 global genomes, so this is already surprising, and indicative of a people that were systematically annihilated, as opposed to a society that simply collapsed. By contrast, plenty of people today, all over the world, are perfect matches for the Ancient Egyptians, who lived 4,000 years ago, roughly 2,000 years before the Ancient Roman genome samples. So it is simply not true that people disappear just because their civilizations collapse.

Just out of curiosity, I ran a BLAST search on this complete Ancient Roman mtDNA genome. There are zero perfect matches outside of other Ancient Roman mtDNA genomes. This proves conclusively that the Ancient Romans were literally exterminated, which must have taken centuries, possibly longer. This in turn implies that their extermination was deliberate. As a consequence of their annihilation, there are basically no people related to the first Christians alive today. Have a look around the world, and ask yourself whether religious people are in trouble again. This is despite the popular narrative the media presents, which is that religious people are belligerent –

Just ask the millions of Muslims held in cages by the PRC (an explicitly atheist regime) who is really under threat.

As a general matter, the world population seems to be divided into three groups: one descended from the Denisovans (some Finns and some Ashkenazi Jews, with small pockets everywhere), one descended from Heidelbergensis (Iberian Roma, Mongolians, Papuans, and Andaman Islanders, again with small pockets everywhere), and apparently everyone else, which includes a giant population that spans the entire world from Greenland to Hawaii, moving east.

Now consider how many people do research in genetics, and yet no one ever mentions this glaring, obvious fact. It is simply not normal for an entire civilization’s bloodline to vanish. Egypt was much smaller than Ancient Rome, and yet there are no Romans left, none, in a dataset that, from what I understand, contains about 100,000 genomes. Keep in mind, mtDNA barely changes at all, even over enormous periods of time, which is why you find plenty of matches to the Ancient Egyptians, Phoenicians, Mayans, Chachapoyas, and others. Literally perfect matches, to truly ancient civilizations, all over the world, in modern populations. Not a single Roman is left, anywhere in the world, despite the fact that Rome was an empire that spanned continents. They were plainly exterminated, there’s no argument to the contrary, and I’d wager some of the same people are planning to exterminate the Uyghurs, and probably others, right now. This obviously does not imply that the PRC is responsible for the annihilation of the Ancient Romans. I would wager instead that the Catholic Church took care of that, but you never know. I am instead suggesting that some people instinctively hate religious people, and that this is probably genetic.

Transactions, Savings, and Income

Posit two economies A and B that begin with exactly the same structure, specifically, the same distribution of preferences, cash, and other assets among the individuals in the population. So for every individual A_i in economy A, there is some corresponding individual B_i in economy B with exactly the same preferences, cash holdings, and other assets. As a consequence, A_i and B_i are identical for economic purposes. Therefore, if we begin at time t_1, where economies A and B are equal, and assume deterministic progression, economies A and B will proceed through exactly the same sequence of transactions over time.

Now assume that economy A progresses through this sequence of transactions much faster, as a function of time, than economy B. It follows that the income generated during any given period of time by economy A will be greater than that of economy B. That is, the GDP of economy A categorically exceeds that of economy B, simply because it progresses faster through what is nonetheless an identical sequence of states and transactions. All other things being equal, it follows that economies with a higher rate of transactions over time will have higher GDP than those with a lower rate of transactions over time.
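
As a toy illustration of this point (a sketch with made-up numbers, not part of the original argument): both economies face the identical sequence of transactions, but A completes twice as many per period as B, so over the same measurement window A records higher GDP.

```octave
% Toy sketch: two economies process the identical sequence of transactions,
% but economy A completes twice as many transactions per period as economy B.
transaction_values = [100 250 75 300 120 90];   % identical sequence of transaction values

rate_A  = 2;   % transactions completed per period in economy A
rate_B  = 1;   % transactions completed per period in economy B
periods = 3;   % measure GDP over the same window of time for both

n_A = min(numel(transaction_values), rate_A * periods);
n_B = min(numel(transaction_values), rate_B * periods);

gdp_A = sum(transaction_values(1:n_A));
gdp_B = sum(transaction_values(1:n_B));

printf("GDP of A over %d periods: %d\n", periods, gdp_A);   % 935
printf("GDP of B over %d periods: %d\n", periods, gdp_B);   % 425
```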

Now posit that A and B are identical, except that in economy B, savings are held in real assets (e.g., land), whereas in economy A, savings are held in financial assets (e.g., bank deposits). Assume again that they begin completely identical, so it must be the case that there are initially no savings outside of cash in either economy. Again consider the economies as a function of time. All of economy B’s savings will go to real assets, and therefore the cash associated with those purchases simply moves on to the seller, in exchange for the asset. This produces no net change in GDP, but it can improve utility, assuming voluntary transactions (i.e., the seller wants cash, the buyer wants an asset). In contrast, in economy A, savings will go to financial assets (including, e.g., bank deposits).

If the bank deposits are backed by fractional reserves, then every dollar deposited will in turn support a multiple of itself in new lending. In that case, the money supply increases, which should increase GDP as that new money is paid out into the economy. If instead the cash goes to equity in a company, then the company is seeking to raise capital for investment. It follows that new income-generating assets will be produced by the company (assuming it is successful) using that capital, thereby increasing GDP. In contrast, simply purchasing an existing asset for cash does not produce any new income, even if that asset already generates income (e.g., through rents on land). It follows that, all other things being equal, economies whose savings are held in financial assets will have higher GDP than economies whose savings are held in real assets.
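
Here is a rough, textbook-style sketch of the fractional-reserve point, with made-up numbers: each round, the bank keeps a fraction of new deposits as reserves and lends out the rest, which is then re-deposited, so a single deposit ends up supporting a multiple of itself in total deposits.

```octave
% Textbook-style fractional-reserve sketch: total deposits converge toward
% initial_deposit / reserve_ratio as lending and re-depositing are repeated.
initial_deposit = 1000;
reserve_ratio   = 0.10;

total_deposits = 0;
new_deposit    = initial_deposit;
for round = 1:50
  total_deposits = total_deposits + new_deposit;
  new_deposit    = new_deposit * (1 - reserve_ratio);  % amount lent out and re-deposited
end

printf("Total deposits after 50 rounds: %.2f\n", total_deposits);                   % ~9948.46
printf("Theoretical limit (D / r):      %.2f\n", initial_deposit / reserve_ratio);  % 10000.00
```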

Abstracting, we see that if savings are deposited in banks with fractional reserves, which then lend in the form of debt, the money supply increases. If instead savings are contributed to companies in exchange for equity, then the supply of income-producing assets increases. In both cases, GDP should increase. In contrast, the purchase of an existing real asset cannot increase GDP. This could help explain the wealth disparities between economies with comparable populations, specifically, differences in the prevalence of financial assets. Moreover, financial assets require law and order, whereas real assets can be defended by individuals. It follows that a reliable legal system is required in order to maximize the GDP of an economy. Therefore, we should find that countries with more reliable legal systems are generally wealthier than those with less reliable legal systems.

Generating Gravity

All of this talk of UFOs lately led me to an article on previously secret Navy tech that is being touted as a likely source of the truly inexplicable UFO sightings that Navy pilots have been reporting. It looks like the more recent sightings are not the real thing, otherwise we wouldn’t be shooting them down so easily, but the article mentions a patent for a device capable of generating a “gravitational wave”. I obviously have no idea what the device actually does, but my model of gravity allows for gravity to be generated, at least in theory, because I posit a cause of gravity, in a mechanical sense, that I suppose could, in theory, be fabricated.

It dawned on me that if you can actually generate gravity, then you probably don’t need electricity, at least not to generate locomotion, since you can simply accelerate a mass using gravity directly. Moreover, you could generate visible light by taking ambient low-frequency light and blue-shifting it into the visible spectrum; doing so across a heterogeneous set of frequencies would produce white light. The net point being that the ability to generate gravity could completely liberate humanity, economically, from the shackles of limited energy.

The Equity Value of a Contract

It just dawned on me that every contract should have a net value, and therefore, some equity value. Specifically, posit a contract between parties A and B. Even if there are no payments or other financial deliveries, if the contract is economically meaningful, it will provide for rights and obligations. If the contract is voluntary, then the value of the contract to each party should be greater than zero on day one, otherwise they wouldn’t have entered into it. This is already a deep fact of economics, since it necessarily implies value creation. That is, the parties are by definition better off than they were without the contract. This is something I discuss at great length in my book VeGA: crime is literally a net economic loser for society, since it undermines voluntary transactions, thereby creating suboptimal outcomes, and probably outright losses.

There is however a separate point, which can be thought of as a function of secondary markets. Specifically, it is possible for either A or B to assign their rights under the contract, or to have someone else assume their obligations. As a consequence, there should be a market price for all four of those components: (i) the rights of A, (ii) the obligations of A, (iii) the rights of B, and (iv) the obligations of B. As a practical matter, secondary markets exist almost exclusively for financial contracts, but there’s no reason such a market should be limited to financial contracts. As a general matter, if party A is, e.g., unable to perform its obligations, there should be a market where A can offload those obligations. This is a generalized version of a futures contract.
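
As a purely hypothetical illustration of those four components (the prices and field names below are invented for the example), each one could in principle be quoted and traded separately, and the net equity value of the contract to each party is then just the sum of its components.

```octave
% Hypothetical sketch: a two-party contract decomposed into four separately
% assignable components, each carrying its own (invented) market price.
components(1) = struct("holder", "A", "type", "rights",      "market_price",  500);
components(2) = struct("holder", "A", "type", "obligations", "market_price", -450);
components(3) = struct("holder", "B", "type", "rights",      "market_price",  480);
components(4) = struct("holder", "B", "type", "obligations", "market_price", -430);

% Net equity value to each party = value of its rights plus the (negative)
% value of its obligations.
for p = {"A", "B"}
  holder    = p{1};
  idx       = strcmp({components.holder}, holder);
  net_value = sum([components(idx).market_price]);
  printf("Net equity value of the contract to %s: %d\n", holder, net_value);  % 50 for each party
end
```

Note that a positive net value for both parties is exactly the day-one condition described above for a voluntary contract.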

This could be achieved through standardization of service contracts, and a legal system that allows people to substitute fungible services. Blockchain platforms are probably a decent candidate. There are of course cases where you would not want to allow substitution in this manner. For example, if you’re buying a bespoke suit, or some other product of fine art or artisanship, you simply don’t want a substitute. If, however, you’re having your apartment painted, and two vendors are certified as fungible, it’s at least possible that the superficial uncertainty of having an unknown third party paint your home would be offset by a robust secondary market that could produce better pricing.

You could even allow for speculators in these types of markets if you, e.g., impose cash penalties for failure to deliver services. Just imagine a contract to have your house painted, issued by a speculator with no ability to actually paint your house, who instead assumed they would be able to offload the obligation to some certified third-party painter. If they fail to find a painter in time, they’re hit with a cash fine that is paid to you, and that is adequate to make up for the inconvenience. Will everyone want to participate in such a market? Maybe not, but you can already see that speculation will create competition on price. At the same time, speculation can cause all kinds of other problems. The net point being that careful consideration of the economy could identify potentially useful applications of this type of generalized futures contract. If I had to bet, I would say food delivery services (including commercial-scale production), home appliance delivery, and other already-fungible goods and services will probably work.
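
To sketch the settlement logic just described (the contract price, vendor quote, and penalty below are all invented for the example): the speculator either offloads the obligation to a certified vendor and keeps the spread, or fails to deliver and pays the buyer the pre-agreed cash penalty.

```octave
% Hypothetical settlement for a speculator-issued service contract.
contract_price = 1000;   % price the buyer paid for the painting contract
penalty        = 200;    % pre-agreed cash penalty for failure to deliver
vendor_quote   = 900;    % best quote from a certified, fungible painter ([] if none was found)

if isempty(vendor_quote)
  speculator_payoff = -penalty;                        % failure to deliver: pay the buyer
  buyer_receives    = penalty;                         % buyer is compensated for the inconvenience
else
  speculator_payoff = contract_price - vendor_quote;   % offload the obligation, keep the spread
  buyer_receives    = 0;                               % buyer simply gets the service as contracted
end

printf("Speculator payoff: %d, cash paid to buyer: %d\n", speculator_payoff, buyer_receives);
```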

Corporate events alone would probably create a market in big cities, since you don’t care about the particular foods served; you care about the delivery date, the quality, and the number of people the order can feed. This would allow for “cheapest to deliver” concepts, which you find in fixed income markets, and which would certainly create opportunities for speculators to make money. In the worst case, the other side of the contract fails to deliver and is charged some penalty rate, which is paid to you, and which, in a big city, will allow you to simply order food for delivery right away.

Runtime Complexity in Bits

It just dawned on me that we can define an analog of the Kolmogorov Complexity that measures the runtime complexity of an algorithm. Specifically, let F_1 and F_2 be equivalent functions run on the same UTM U, in that U(F_1(x)) = U(F_2(x)) for all x. During the operation of the functions, the tape of the UTM will change. Simply count the number of changes to the tape, a count that has units of bits, which allows us to compare the runtimes of the functions in the same units as the Kolmogorov Complexity. As a general matter, we can define a measure of runtime complexity, R_K(F(x)), given by the number of bits changed during the runtime of F as applied to x.
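
Here is a minimal sketch of the idea, with the tape reduced to a fixed-width binary register rather than a full UTM, and with an arbitrary example computation (the sum 1 + 2 + … + 10): two procedures produce the same final result, but they change different numbers of tape cells along the way.

```octave
% Sketch: measure "runtime complexity in bits" as the number of tape cells
% that change value during a computation. The tape is a 16-bit register.
width = 16;      % tape width in bits
x     = 1:10;    % input to both procedures

% Procedure F1: accumulate the sum term by term, writing the running total
% to the tape after every addition.
tape    = zeros(1, width);
flips_1 = 0;
total   = 0;
for k = 1:numel(x)
  total    = total + x(k);
  new_tape = bitget(total, 1:width);           % binary representation of the running total
  flips_1  = flips_1 + sum(new_tape != tape);  % count the cells that actually changed
  tape     = new_tape;
end

% Procedure F2: compute the sum first, then write the result to the tape once.
tape     = zeros(1, width);
new_tape = bitget(sum(x), 1:width);
flips_2  = sum(new_tape != tape);

printf("Bits changed by F1: %d\n", flips_1);   % same answer, more tape activity
printf("Bits changed by F2: %d\n", flips_2);   % 5 (the ones in the binary form of 55)
```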

Interestingly, my model of physics implies an equivalence between energy and information, and so a change in the information content of a system must be the result of acceleration (see Equation 10 of A Computational Model of Time-Dilation). This connects computation with forces applied to systems, which are obviously related, since you need to apply a force to change the state of a tape in a UTM. Note that there’s a bit of pedantry to this, since a count of changed bits is not necessarily the same thing as the information content of the tape measured in bits, but in any case, it nonetheless measures runtime complexity in some form of bits.

Measuring the Information Content of a Function

It just dawned on me that my paper, Information, Knowledge, and Uncertainty [1], seems to allow us to measure the amount of information a predictive function provides about a variable. Specifically, assume F: \mathbb{R}^K \rightarrow S \subset \mathbb{R}. Quantize S into M uniform intervals. It follows that any sequence of N predictions can produce any one of M^N possible outcomes. Now assume that the predictions generated by F produce exactly one error out of N predictions. Because this system is perfect but for one prediction, there is only one unknown prediction, and it can be in any one of M states (i.e., all other predictions are fixed as correct). Therefore,

U = \log(M).

As a general matter, given E errors over N predictions, the total information is I = N \log(M), the uncertainty is U = E \log(M), and so our Knowledge is given by,

K = I - U = (N - E) \log(M).

If we treat \log(M) as a constant, and ignore it, we arrive at N - E. This is simply accuracy multiplied by the number of predictions. However, the number of predictions is relevant, since a small number of predictions doesn’t really tell you much. As a consequence, this is an arguably superior measure of accuracy, one that is rooted in information theory. For the same reasons, it captures the intuitive connection between ordinary accuracy and uncertainty.
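
Here is a minimal sketch of the calculation, using log base 2 so the units are bits (the choice of base is just a convention), with made-up values for the true values, the predictions, and M:

```octave
% Minimal sketch of the Knowledge measure: quantize the prediction range into
% M uniform intervals, count the errors E over N predictions, and compute
% I = N*log2(M), U = E*log2(M), K = I - U = (N - E)*log2(M).
actual    = [0.12 0.48 0.33 0.90 0.71 0.05 0.64 0.27];   % true values in S = [0, 1]
predicted = [0.10 0.50 0.30 0.20 0.70 0.05 0.65 0.25];   % predictions generated by F

M = 10;                                                  % number of uniform intervals
N = numel(actual);

% A prediction counts as correct if it falls in the same interval as the true value.
bin = @(v) min(floor(v * M) + 1, M);
E   = sum(bin(predicted) != bin(actual));                % number of errors (2 here)

I = N * log2(M);
U = E * log2(M);
K = I - U;                                               % equivalently (N - E) * log2(M)

printf("N = %d, E = %d, I = %.2f bits, U = %.2f bits, K = %.2f bits\n", N, E, I, U, K);
```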

Paternal Lineage and mtDNA

In my paper, A New Model of Computational Genomics [1], I introduced an embedding from mtDNA genomes to Euclidean space that allows for the prediction of ethnicity using mtDNA alone (see Section 5). The raw accuracy is 79.00%, without any filtering for confidence, over a dataset of 36 global ethnicities; chance implies an accuracy of about 3% (roughly 1 in 36). Because ethnicity is obviously a combination of both maternal and paternal DNA, it must be the case that mtDNA carries information about the paternal lineage of an individual. Exactly how this happens is not clear from this result alone, but the overall result plainly implies that mtDNA, which is inherited only from the mother, carries information about paternal lineage generally. This does not mean you can say who your father is using mtDNA, but it does mean that you can predict your ethnicity with very good accuracy using only your mtDNA, which in turn implies your paternal ethnicity.

One possible hypothesis is that paternal lines actually do impact mtDNA, through the process of DNA replication. Specifically, even though mtDNA is inherited directly from your mother, it still has to be replicated in your body, trillions of times. As a consequence, the genetic machinery, which I believe to be inherited from both parents, could produce mutations that are characteristic of a paternal lineage. I can’t know that this is true, but it’s not absurd, and something has to explain these results.

Finally, note that this implies that the clusters generated using mtDNA alone are also indicative of paternal lineage (see Section 4 of [1]). To test this, I wrote an analogous algorithm that uses clusters to predict ethnicity. Specifically, the algorithm begins by building a cluster for each genome in the dataset, which includes only those genomes that are a 99% match to the genome in question (i.e., counting matching bases and dividing by genome size). The algorithm then builds a distribution of ethnicities for each such cluster (e.g., the cluster for row 1 includes 5 German genomes and 3 Italian genomes). Because there are now 411 genomes and 44 ethnicities in the updated dataset, this corresponds to a matrix with 411 rows and 44 columns, where each entry is an integer indicating the number of genomes from a given population included in the applicable cluster. I then did exactly what I described in Section 4 of [1], which is to compare each distribution to every other, by population, ultimately building a dataset of profiles (the particulars are in [1]). The accuracy is 77.62%, which is about the same as using the actual genomes themselves. This shows that the clusters associated with a given genome contain information about the ethnicity of an individual, and therefore about the paternal lineage of that individual.
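
For concreteness, here is a toy sketch of the clustering step just described, with randomly generated placeholder genomes and invented ethnicity labels; the actual implementation, applied to the full dataset, is in the code linked below.

```octave
% Toy sketch of the clustering step: for each genome, build a cluster of all
% genomes that are at least a 99% base-for-base match, then count the
% ethnicities in that cluster. The genomes and labels are placeholders.
num_genomes   = 6;
genome_length = 1000;
threshold     = 0.99;

genomes = randi(4, num_genomes, genome_length);   % rows of bases encoded 1..4
genomes(2, :)   = genomes(1, :);                  % make genome 2 a near-copy of genome 1
genomes(2, 1:5) = mod(genomes(2, 1:5), 4) + 1;    % 5 mismatching bases (a 99.5% match)

ethnicities = {"German", "German", "Italian", "Italian", "Japanese", "Japanese"};
[labels, ~, label_idx] = unique(ethnicities);

% cluster_counts(i, j) = number of genomes of ethnicity j in the cluster of genome i.
cluster_counts = zeros(num_genomes, numel(labels));
for i = 1:num_genomes
  for j = 1:num_genomes
    match = sum(genomes(i, :) == genomes(j, :)) / genome_length;
    if match >= threshold
      cluster_counts(i, label_idx(j)) = cluster_counts(i, label_idx(j)) + 1;
    end
  end
end

disp(cluster_counts)   % analogous to the 411 x 44 matrix described above, but 6 x 3
```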

All of this implies that many people have deeply mixed heritage, in particular Northern Europeans, ironically touted as a “pure race” by racists who apparently didn’t study very much of anything, including their own languages, which on their own suggest a mixed heritage. One sensible hypothesis is that the clusters themselves are indicative of the distribution of both maternal and paternal lines in a population. You can’t know this is the case, but it’s consistent with the evidence, and if it is the case, racists are basically a joke. Moreover, I’m not aware of any accepted history that explains this diversity, and my personal guess (based upon little more than intuition) is that there was a very early period of sea-faring, globally connected people, prior to written history.

Attached is the code for the analogous clustering algorithm. The rest of the code and the dataset are linked to in [1].

https://www.dropbox.com/s/bmybgpxl1s5e1aq/Cluster-Based_Profile_CMNDLINE.m?dl=0

Using Local Charges to Filter Chemicals

I noted a while back that the proteins found in cells appear to make use of lock-and-key systems based upon local electrostatic charge. Specifically, two proteins will or won’t bond depending upon the local charges at their ends, and will or won’t permeate a medium based upon their local charges generally (as opposed to their net overall charge). While watching DW News, specifically a report on filtering emissions from concrete production, it dawned on me that the same principles could be used to filter chemicals during any process, because all sufficiently large molecules will have local charges that can differ from the net charge of the molecule as a whole. For example, water carries partial local charges, because of the arrangement of its two hydrogen atoms and one oxygen atom, though the proteins encoded by DNA of course have far more complex local charge structures.

The idea is that you have a mesh that is capable of changing its charge locally, at a very small scale, and that mesh is controlled by a machine learning system that tunes itself to the chemicals at hand. You can run an optimizer to find the charge configuration that best filters those chemicals. This would cause the mesh to behave like the membranes you find in cells, which are permeable to some molecules and not others, by simply generating an electrostatic structure that does exactly that.
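
To be clear about how speculative this is, the sketch below is just a greedy hill climb over a small grid of ±1 charges, scored against a placeholder target pattern; the grid size, the scoring function, and the target are all invented for illustration, and a real system would need a physical or learned model of permeability in place of the toy score.

```octave
% Toy sketch: greedy hill climb over a mesh of locally tunable charges.
% The "selectivity" score is a placeholder; in practice it would come from
% a physical model or from measurements of what the mesh actually passes.
grid_size = [8 8];                          % mesh of independently tunable charge sites
target    = sign(randn(grid_size));         % stand-in for the charge pattern that best
                                            % passes the desired molecules and blocks the rest
score     = @(mesh) sum(sum(mesh == target));

best_mesh  = sign(randn(grid_size));        % random initial +1 / -1 charge configuration
best_score = score(best_mesh);

for step = 1:5000
  candidate       = best_mesh;
  site            = randi(prod(grid_size)); % pick one charge site
  candidate(site) = -candidate(site);       % flip its charge
  if score(candidate) > best_score
    best_mesh  = candidate;
    best_score = score(candidate);
  end
end

printf("Selectivity score: %d out of %d sites\n", best_score, prod(grid_size));
```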

Is this a trivial matter of engineering? No, of course not, because you need charges localized at the scale of a few molecules, but the idea makes sense, and can probably be implemented, because we already produce semiconductors with components at a similar scale. This would presumably require both positive and negative charges, so it’s not the same as a typical semiconductor, but it’s not an impossible ask on its face. If it works, it would produce a generalized method for capturing hazardous substances. It might not be too expensive, because, e.g., semiconductors are cheap, but whatever the price, it’s cheaper than annihilation, which is now in the cards, because you’re all losers.