A Fundamental Problem with the Double-Slit Experiment

I noticed a while back that the purported Quantum Mechanical explanations for the Double-Slit Experiment don’t make any sense, for a very simple reason: people generally assume the velocity of light is fixed at c. However, if we assume self-interference for a photon, then the photon must (i) change direction, or (ii) change velocity, or both; otherwise, there won’t be any measurable effects of self-interference, barring even more exotic assumptions.

[Figure: A diagram of the Double-Slit Experiment. Image courtesy of Wikipedia.]

We can rule out case (ii), since photons are generally assumed to travel at a constant velocity of c in a vacuum. However, case (i) produces exactly the same problem: if the photon changes direction due to self-interference, then its arrival time at the screen will imply a velocity of less than c, since, by definition, the photon will not have followed a straight path from the gun to the screen. This is the end of the story, and I cannot believe that no one has pointed this out before.
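To make the geometry concrete, here’s a minimal Python sketch, with hypothetical distances chosen purely for illustration, that computes the velocity implied by a bent path traversed at c:

import math

# Hypothetical geometry, in meters, purely for illustration.
gun_to_slits = 1.0     # distance from the photon gun to the slit plane
slits_to_screen = 1.0  # distance from the slit plane to the screen
offset = 0.25          # lateral offset of the impact point on the screen

c = 299792458.0  # velocity of light (m/s)

# Straight-line distance from the gun to the impact point.
straight = math.hypot(gun_to_slits + slits_to_screen, offset)

# Bent path: straight to the slit, then at an angle to the impact point.
bent = gun_to_slits + math.hypot(slits_to_screen, offset)

# If the photon traverses the bent path at c, its arrival time implies a
# straight-line velocity below c.
implied = straight / (bent / c)
print(implied / c)  # ~0.9925, i.e., less than 1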

We can instead posit a simple explanation for the purported interference pattern: the photons scatter inside the slit (i.e., bounce around), which from our perspective is flat, but which from the perspective of a photon is instead a tunnel, since the photon (or other elementary particle) is so small. This will produce two distributions of scattering angles (one for each slit), which will overlap at certain places along the screen more than others, producing a distribution of impact points that would otherwise look like a wave interference pattern.
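Here’s a minimal Monte Carlo sketch of this proposal. Everything in it is an assumption made purely for illustration, in particular the Gaussian standing in for whatever the true distribution of scattering angles inside the slit would be:

import math
import random

# All parameters here are hypothetical, chosen purely for illustration.
slit_separation = 1e-3   # meters between the two slits
screen_distance = 1.0    # meters from the slit plane to the screen
angle_spread = 0.002     # spread (radians) of the assumed scattering angles
num_photons = 100_000

impacts = []
for _ in range(num_photons):
    slit = random.choice((-0.5, 0.5)) * slit_separation  # the photon takes one slit
    theta = random.gauss(0.0, angle_spread)              # scattering inside the slit
    impacts.append(slit + screen_distance * math.tan(theta))

# Crude text histogram of impact points along the screen: the two per-slit
# distributions overlap more in some places than in others.
num_bins, lo, hi = 41, -0.01, 0.01
bins = [0] * num_bins
for x in impacts:
    i = math.floor((x - lo) / (hi - lo) * (num_bins - 1))
    if 0 <= i < num_bins:
        bins[i] += 1
for i, count in enumerate(bins):
    position = lo + i * (hi - lo) / (num_bins - 1)
    print(f"{position:+.4f} m {'#' * (count // 250)}")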

This is much simpler, isn’t it, and clearly a better explanation than some exotic nonsense about time and other realities. That’s not how you do science. Now, it could be that there is some experiment that requires such exotic theories, but I don’t know of one, not even the Quantum Eraser Experiment. All of these experiments have simple explanations, and we’ve developed a horrible, unscientific habit of embracing exotic explanations for basic phenomena, which is at this point suspicious, given the military and economic value of science.

Solution to the Liar’s Paradox

I don’t remember the first time I heard about the Liar’s Paradox, but it was definitely in college, because it came up in my computer theory classes in discussions of formal grammars. As such, I’ve been thinking about it on and off for about 20 years. Wikipedia says that the first correct articulation of the Liar’s Paradox is attributed to a Greek philosopher named Eubulides, who stated it as follows: “A man says that he is lying. Is what he says true or false?” For purposes of this note, I’m going to distill that to the modern formulation: “This statement is false.”

As an initial matter, we must accept that not all statements are capable of meaningful truth values. For example, “Shoe”. This is just a word that does not carry any intrinsic truth value, nor is there any meaningful mechanical process that I can apply to the statement to produce a truth value. Contrast this with “A AND B”, where we know that A = “true” and B = “false”, given the typical boolean “AND” operator. There is in this case a mechanical process that can be applied to the statement, producing the output “false”. Now, all of that said, there is nothing preventing us from concocting a lookup table where, e.g., the statement “Shoe” is assigned the value “true”.
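Here’s a trivial sketch of the distinction, with a hypothetical lookup table on one side, and a mechanical evaluator on the other:

# A lookup table that assigns truth values by mere fiat.
lookup = {"Shoe": True}

# A mechanical process: evaluate "A AND B" given truth values for A and B.
def evaluate_and(a: bool, b: bool) -> bool:
    return a and b

print(lookup["Shoe"])             # True, by assignment alone
print(evaluate_and(True, False))  # False, by mechanical evaluation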

Now consider the distilled Liar’s Paradox again: “This statement is false”. There is no general, mechanical process that will evaluate such a statement. However, it is plainly capable of producing a truth value, since it simply asserts one for itself, much like a lookup table. Typically, this is introduced as producing a paradox: if we assume the truth value is false, then that assumed truth value is consistent with the truth value asserted in the statement. Generally speaking, when assertion and observation are consistent, we say the assertion is true, and this is an instance of that. As such, the statement is true, despite the fact that the statement itself asserts that it is false. Hence, the famous paradox.

Now instead approach the problem from the perspective of solving for the truth value, rather than asserting the truth value. This would look something like, “This statement is A”, where A \in \{true, false\}. Now we can consider the two possible values of A. If A = true, then the statement asserts a truth value that is consistent with the assumed truth value, and there’s nothing wrong with that. If instead A = false, then we have a contradiction, as noted above. Typically, when engaging in mathematics, contradictions are used to rule out possibilities. Applying that principle in this case yields the result that A = true, which resolves the paradox.
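Here’s a minimal sketch of that solve-for-A process, which mechanically rules out the assignment that contradicts itself:

def consistent(a: bool) -> bool:
    # The statement asserts that its own truth value is `a`, and we assume
    # its actual truth value is also `a`. The statement is accurate exactly
    # when its assertion matches its actual value, and consistency requires
    # that this accuracy verdict agree with the assumed value itself.
    asserted, assumed = a, a
    accurate = (asserted == assumed)
    return accurate == assumed

print([a for a in (True, False) if consistent(a)])  # [True]: False is ruled out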

In summary, not all statements are capable of mechanical evaluation, and only a subset of those mechanically evaluable statements resolve to true or false. This, however, does not prevent us from simply assigning a truth value to a statement, whether by a lookup table, or within the statement itself. If we do so, we can nonetheless apply basic principles of logic and mathematics, and if we adhere to them, we can exclude certain purported truth values that are the result of mere assertion. In this case, such a process implies that a statement that asserts its own truth value is always true.

Compton Scattering

Introduction

My work in physics relies heavily on the Compton Wavelength, which, as far as I know, was introduced solely to explain Compton Scattering. Wikipedia introduces the Compton Wavelength as a “Quantum Mechanical” property of particles, which is nonsense. Compton Scattering instead plainly demonstrates the particle nature of both light and electrons, since the related experiment literally pings an electron with an X-ray, causing both particles to scatter, just like billiard balls. I obviously have all kinds of issues with Quantum Mechanics, which I no longer think is physically real, but that’s not the point of this note.

Instead, the point of this note is the implications of a more generalized form of the equation that governs Compton Scattering. Specifically, Arthur Compton proposed the following formula to describe the phenomenon he observed when causing X-rays (i.e., photons) to collide with electrons:

\lambda' - \lambda = \frac{h}{m_ec} (1 - \cos(\theta)),

where \lambda' is the wavelength of the photon after scattering, \lambda is the wavelength of the photon before scattering, h is Planck’s constant, m_e is the mass of an electron, c is the velocity of light, and \theta is the scattering angle of the photon. Note that \frac{h}{m_ec}, which is the Compton Wavelength, is a constant in this case, but we will treat it as a variable below.

For intuition, if the inbound photon literally bounces straight back at \theta = 180\textdegree, then (1 - \cos(\theta)) evaluates to 2, maximizing the function at \lambda' - \lambda = 2\frac{h}{m_ec}. Note that \lambda' - \lambda is the difference between the wavelength of the photon before and after collision, and so in the case of a 180\textdegree bounce-back, the photon loses the most energy possible (i.e., the wavelength becomes maximally longer after collision, decreasing energy; see Planck’s equation for more). In contrast, if the photon scatters in a straight line, effectively passing through the electron at an angle of \theta = 0\textdegree, then (1 - \cos(\theta)) = 0, implying that \lambda' - \lambda = 0. That is, the photon loses no energy at all in this case. This all makes intuitive sense: in the former case, the photon presumably interacts to the maximum possible extent with the electron, losing the maximum energy possible, causing it to recoil at a 180\textdegree angle, like a ball thrown straight at a wall. In contrast, if the photon effectively misses the electron, then it loses no energy at all, and simply continues onward in a straight line (i.e., a 0\textdegree angle).
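For concreteness, here’s a short numerical check of these two limiting cases, using standard values for the constants:

import math

h = 6.62607015e-34      # Planck's constant (J s)
m_e = 9.1093837015e-31  # electron mass (kg)
c = 299792458.0         # velocity of light (m/s)

compton = h / (m_e * c)  # the Compton Wavelength, ~2.426e-12 m

for theta_deg in (0, 90, 180):
    shift = compton * (1 - math.cos(math.radians(theta_deg)))
    print(f"theta = {theta_deg:3d} deg: wavelength shift = {shift:.3e} m")
# theta = 0 gives no shift; theta = 180 gives the maximum shift, 2h/(m_e c).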

All of this makes sense, and as you can see, it has nothing to do with Quantum Mechanics, which again, I think is basically fake at this point.

Treating Mass as a Variable

In the previous section, we treated the Compton Wavelength \frac{h}{m_ec} as a constant, since we were concerned only with photons colliding with electrons. But we can consider the equation as a specific instance of a more general equation that is a function of some variable mass m. Now, this obviously has some unstated practical limits, since you probably won’t get the same results bouncing a photon off of a macroscopic object, but we can consider, e.g., heavier leptons like the Tau particle. This allows us to meaningfully question the equation, and if it holds generally as a function of mass, it could provide an insight into why this specific equation works. Most importantly for me, I have an explanation that is consistent with the notion of a “horizontal particle” that I developed in my paper, A Computational Model of Time Dilation [1].

So let’s assume that the following, more general form of the equation holds as a function of mass:

\Delta = \lambda' - \lambda = \frac{h}{mc} (1 - \cos(\theta)).

Clearly, as we increase the mass m, we will decrease \Delta for any given value of \theta. So let’s fix \theta = 180\textdegree to simplify matters, implying that the photon bounces right back to its source.
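Assuming this generalized form holds, a quick numerical sketch (using standard values for the charged lepton masses) shows the maximum shift falling off as the mass increases:

import math

h = 6.62607015e-34  # Planck's constant (J s)
c = 299792458.0     # velocity of light (m/s)

# Charged lepton masses in kg.
masses = {
    "electron": 9.109e-31,
    "muon": 1.884e-28,
    "tau": 3.168e-27,
}

theta = math.pi  # 180 degrees: the photon bounces straight back
for name, m in masses.items():
    delta = (h / (m * c)) * (1 - math.cos(theta))  # = 2h/(mc)
    print(f"{name}: Delta = {delta:.3e} m")
# The heavier the lepton, the smaller the wavelength shift Delta.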

The fundamental question is: why would the photon lose less energy as a function of the mass with which it interacts? I think I have an explanation that actually translates well macroscopically. Imagine a wall of a fixed size, large enough that it can be reliably struck by a ball traveling towards it. Let’s posit a mass so low (again, nonetheless of a fixed size) that the impact of the ball actually causes the wall to be displaced. If the wall rotates somewhat like a pinwheel, then it could strike the ball multiple times, and each interaction could independently reduce the energy of the ball.

This example clearly does not work for point particles, though it could work for waves, and it certainly does work for horizontal particles, for which the energy or mass (depending upon whether it is a photon or a massive particle) is spread about a line. You can visualize this as a set of sequential “beads” of energy / mass. This would give massive particles a literal wavelength, and cause a massive particle to occupy a volume over time when randomly rotating, increasing the probability of multiple interactions. For intuition, imagine randomly rotating a string of beads in 3-space.

Astonishingly, I show in [1] that the resultant wavelength of a horizontal massive particle is actually the Compton Wavelength. I also show that this concept implies the correct equations for time-dilation, momentum, electrostatic forces, magnetic forces, inertia, and centrifugal forces, and, more generally, I present a totally unified theory of physics in a much larger paper that includes [1], entitled A Combinatorial Model of Physics [2].

Returning to the problem at hand, the more massive a particle is, the more inertia it has, and so the rotational and more general displacement of the particle due to collision with the photon will be lower as a function of the particle’s mass. Further, assuming momentum is conserved, if the photon rotates (which Compton Scattering demonstrates as a clear possibility), regardless of whether it loses energy, that change in momentum must be offset by the particle with which it collides. The larger the mass of the particle, the less that particle will have to rotate in order to offset the photon’s change in momentum, again decreasing the overall displacement of that particle, in turn decreasing the probability of more than one interaction, assuming the particle is either a wave or a horizontal particle.

Conclusion

Though I obviously have rather aggressive views on the topic, if we accept that Compton’s scattering equation holds generally (and I’m not sure it does), then we have a perfectly fine, mechanical explanation for it, if we assume elementary particles are waves or horizontal particles. So, assuming all of this holds up, point particles don’t really work, which I think is obvious from the fact that light has a wavelength in the first instance, and is therefore not a point in space, but must be at least a line.

Uncertainty, Computability, and Physics

I’m working on formalizing an existing paper of mine on genetics that will cover the history of mankind using mtDNA and Machine Learning. Part of this process required me to reconsider the genome alignment I’ve been using, and this opened a huge can of worms yesterday, related to the very nature of reality and whether or not it is computable. This sounds like a tall order, but it’s real, and if you’re interested, you should keep reading. The work on the genetic alignment itself is basically done, and you can read about it here. The short story is that the genome alignment I’ve been using is almost certainly the unique and correct global alignment for mtDNA, for both theoretical and empirical reasons. But that’s not the interesting part.

Specifically, I started out by asking myself: what if I compare a genome to itself, except I shift one copy of the genome by a single base? A genome is a string of labels, e.g., g = (A,C,G,T). So if I were to shift g by one base in a modulo style (note that mtDNA is a loop), I would have \bar{g} = (T,A,C,G), shifting each base by one index and wrapping T around to the front of the genome. Before shifting, a genome is obviously a perfect match to itself, and so the number of matching bases between g and itself is 4. However, once I shift it by one base, the match count between g and \bar{g} is 0. Now, this is a fabricated example, but the intuition is already there: shifting a string by one index could conceivably completely scramble the comparison, potentially producing random results.
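In code, the shifted comparison is a couple of helper functions (a minimal sketch; the names are mine):

def circular_shift(g, k=1):
    # Shift a genome right by k bases, wrapping the tail around to the front.
    return g[-k:] + g[:-k]

def match_count(a, b):
    # Number of positions at which the two genomes carry the same base.
    return sum(1 for x, y in zip(a, b) if x == y)

g = ("A", "C", "G", "T")
g_bar = circular_shift(g)     # ("T", "A", "C", "G")
print(match_count(g, g))      # 4: a genome matches itself perfectly
print(match_count(g, g_bar))  # 0: one shift scrambles the comparison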

Kolmogorov Complexity

Andrey Kolmogorov defined a string v as random (which we now call Kolmogorov Random in his honor) if there is no compressed representation of v that can be run on a UTM, generating v as its output. The length of the shortest program x that generates v on a UTM, i.e., v = U(x), is called the Kolmogorov Complexity of v. As a result, if a string is Kolmogorov Random, then the Kolmogorov Complexity of that string should be approximately its length; i.e., the shortest program that produces v is basically just a print function that takes v as its input and prints v to the tape as output. As such, we typically say that the Kolmogorov Complexity of a Kolmogorov Random string v is K(v) = |v| + C, where |v| denotes the length of v, and C is a constant given by the length of the print function.

So now let’s assume I have a Kolmogorov Random string v, and I again shift it by one base in a modulo style, producing \bar{v}. Assume that \bar{v} is not Kolmogorov Random, and further, let n denote the length of v and \bar{v}. Now consider the string s = \bar{v}(2:n) = v(1:n-1), i.e., entries 2 through n of \bar{v}, and entries 1 through n-1 of v. If s is not Kolmogorov Random, then it can be compressed into some string x such that s = U(x), where U is some UTM and the length of x is significantly less than the length of s. But this implies that we can produce v by first generating s = U(x), and then appending v(n) to the end of s. This in turn implies that v can be generated by a string that is significantly shorter than v itself, contradicting the assumption that v is Kolmogorov Random. Therefore, s must be Kolmogorov Random. Note that we can produce s by removing the first entry of \bar{v}. Therefore, if \bar{v} is not Kolmogorov Random, then we can produce s by first generating \bar{v} using a string significantly shorter than |s| = |\bar{v}| - 1, which contradicts the fact that s must be Kolmogorov Random. Therefore, \bar{v} is Kolmogorov Random.

This is actually a really serious result, one that might allow us to test for randomness, by shifting a given string by one index and testing whether comparing matching indexes produces statistically random results. Note that, unfortunately, the Kolmogorov Complexity is itself non-computable, so we cannot test for randomness using the Kolmogorov Complexity directly, but as you can see, it is nonetheless a practical and powerful notion.

Computation, Uncertainty, and Uniform Distributions

Now imagine we instead begin with the string v = (a,b,a,b,a,b,a,b). Clearly, if we shift by 1 index, the match count will drop to exactly 0, and if we shift again, it will jump back to 8, i.e., the full length of the string. This is much easier to do in your head, because the string has a clear pattern of alternating entries, and so a bit of thinking shows that shifting by 1 base will cause the match count to drop to 0. This suggests a more general concept, which is that uncertainty can arise from the need for computational work. That is, the answer to a question could be attainable provided we perform some number of computations, and prior to that, the answer is otherwise unknown to us. In this case, the question is: what is the match count after shifting by one base? Because the problem is simple, you can do this in your head.

But if I instead asked you the same question with respect to \bar{v} = (a,b,b,b,a,a,a,b,b,b,a,b,b,b,b,a,a,a,a,b,b,b,b,b,a,b,a,b,a,b,a,b), you’d probably have to grab a pen and paper, and carefully work out the answer. As such, your uncertainty with respect to the same question depends upon the subject of that question, specifically in this case, v and \bar{v}. The former is so simple that the answer is obvious regardless of how long the string is, whereas the latter is idiosyncratic, and therefore requires more computational work. Intuitively, you can feel that your uncertainty is higher in the latter case, and it seems reasonable to connect that to the amount of computational work required to answer the question.
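Reusing circular_shift and match_count from the sketch above, the machine settles the question instantly:

v_bar = tuple("abbbaaabbbabbbbaaaabbbbbabababab")
print(match_count(v_bar, circular_shift(v_bar)))  # 16 for this particular string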

This leads to the case where you simply don’t have an algorithm, even if such an algorithm exists. That is, you simply don’t know how to solve the problem in question. If, in this case, there is still some finite set of possible answers, then you effectively have a random variable. That is, the answer will be drawn from some finite set, and you have no means of calculating the answer, and therefore no reason to distinguish between the various possible answers, producing a uniform distribution over the set of possible answers. This shows us that even a solvable, deterministic problem can appear random due to subjective ignorance of the solution to the problem.

Deterministic Randomness

I recall a formal result that gives the density of Kolmogorov Random strings for a given length n, but I can’t seem to find it. However, you can easily show that there must be at least one Kolmogorov Random string of every length. Specifically, the number of strings of length less than or equal to n is given by \sum_{i=0}^{n} 2^i = 2^{n+1} - 1. The number of strings of length n+1 is instead 2^{n+1}, and as such, there is at least 1 Kolmogorov Random string of length n+1, since there aren’t enough shorter strings to serve as compressed codes. As a result, we can produce Kolmogorov Random strings by simply counting, producing all strings of length n = 1, 2, 3, \ldots, though we cannot test them individually for randomness, since the Kolmogorov Complexity is non-computable.

In fact, I proved a corollary that’s even stronger. Specifically, you can prove that a UTM cannot cherry-pick the random strings that are generated by such a process. This is, however, a corollary of a related result, which we will prove first: a UTM cannot increase the Kolmogorov Complexity of an input.

Let y = U(x). Since x generates y when given as the input to a UTM, it follows that K(y) \leq K(x) + C. That is, we can generate y by first running the shortest program that generates x, which has a length of K(x), and then feeding x back into the UTM, which will in turn generate y. This is simply a UTM that runs twice, and the code for that machine is independent of the particular x under consideration; therefore its length is given by a constant C, which proves the result. As such, the complexity of the output of a UTM is, up to an additive constant, at most the complexity of its input.

This is a counterintuitive result, because we think of machines as doing computational work, which connotes that new information is being produced, but in the strictest sense, this is just not true. Now, as noted above, computational work is often required to answer questions, and so in that regard, computational work can alleviate uncertainty, but it cannot increase complexity in the sense of the Kolmogorov Complexity. Now we’re ready for the second result, which is that a UTM cannot cherry-pick Kolmogorov Random strings.

Assume that we have some program x that generates strings, at least some of which are Kolmogorov Random, and that U(x) never stops producing output. Because U(x) never terminates, and there are only so many strings of a given length, the strings generated by U(x) must eventually increase in length, and that cannot be a bounded process. As such, if U(x) never stops generating Kolmogorov Random strings, then those Kolmogorov Random strings must also eventually increase in length, and that again cannot be a bounded process, producing arbitrarily long Kolmogorov Random strings. This implies that U(x) will eventually generate a Kolmogorov Random string y such that |y| > |x|, which in turn implies that K(y) > K(x). But the result above proves that a UTM cannot add complexity to its input. Therefore, if U(x) eventually generates y, there cannot be some other program that isolates y from the rest of the output generated by U(x); otherwise the result above would be contradicted.

This second result shows that there are serious limitations on the ability of a UTM to deterministically separate random and non-random strings. Specifically, though it’s clear that a UTM can generate random strings, they cannot be isolated from the rest of its output if the random strings are unbounded in length.

Computable Physics

Now we’re ready for a serious talk on physics. When people say, “that’s random”, or “this is a random variable”, the connotation is that something other than a mechanical process (i.e., a UTM) created the experience or artifact in question. This is almost definitional once we have the Kolmogorov Complexity, because in order for a string to be random, it must be Kolmogorov Random, which means that it was not produced by a UTM in any meaningful way, and was instead simply printed to the output tape, with no real computational work. So where did the random string come from in the first instance?

We can posit the existence of random sources in nature, as distinct from computable sources, but why would you do this? The more honest epistemological posture is that physics is computable, which allows for Kolmogorov Random artifacts, and non-random artifacts, since again, UTMs can produce Kolmogorov Random strings. There are however, as shown above, restrictions on the ability of a UTM to isolate Kolmogorov Random strings from non-random strings. So what? This is consistent with a reality comprised of a mix of random and non-random artifacts, which sounds about right.

Now, what’s interesting is that, because integers and other discrete structures are obviously physically real, we still have non-computable properties of reality, since, e.g., the integers must have non-computable properties (i.e., the set of properties over the integers is uncountable). Putting it all together, we have a computable model of physics that is capable of producing both random and non-random artifacts, with at least some limitations, and a more abstract framework of mathematics itself that also governs reality in a non-mechanical manner, and that nonetheless has non-computable properties.

mtDNA Alignment

I’m planning on turning my work on mtDNA into a truly formal paper, one that is more than just the application of Machine Learning to mtDNA, and is instead a formal piece on the history of humanity. As part of that effort, I revisited the global alignment I use (which is discussed here), attempting to put it on a truly rigorous basis. I have done exactly that. This is just a brief note; I’ll write something reasonably formal tomorrow, but the work is done.

First, there’s a theoretical question: how likely are we to find the 15 bases I use as the prefix (i.e., starting point) for the alignment in a given mtDNA genome? You can find these 15 bases by simply looking at basically any mtDNA FASTA file on the NIH website, since they plainly use this same alignment. Just look at the first 15 bases (CTRL + F “gatcacaggt”); you’ll see them. Getting back to the probability of finding a particular sequence of 15 bases in a given mtDNA genome, the answer is: not very likely. So we should be impressed that 98.34% of the 664 genomes in the dataset contain exactly the same 15 bases, and the remainder contain what is plainly the result of an insertion / deletion that altered that same sequence.

Consider first that there are 4^{15} = 1,073,741,824 sequences of bases of length 15, since there are 4 possible bases, ACGT. We want to know how likely it is that we find a given fixed sequence of length 15 anywhere in an mtDNA genome. If we find it more than once, that’s great; we’re just interested initially in the probability of finding it at least once. The only case that does not satisfy this criterion is the case where it’s not found at all. The probability of two random 15-base sequences matching at all 15 bases is \frac{1}{4^{15}}. Note that a full mtDNA genome contains N = 16,579 bases. As such, we have to consider comparing 15 bases starting at any one of the N - 14 indexes available for comparison, again counting all cases where the sequence is found at least once as a success.

This is similar to asking for the probability of tossing at least one heads with a coin over N - 14 trials. However, note that in this case, the probabilities of success and failure are unequal. Since the probability of success at a given index is given by p_s = \frac{1}{4^{15}}, the probability of failure at a given index is p_f = 1 - p_s. Therefore, the probability that we find zero matches over all N - 14 indexes is given by P_f = p_f^{N-14}, and so the probability that we find at least one match is given by 1 - P_f = 0.0000154272. That’s a pretty small probability, so we should already be impressed that we find this specific sequence of 15 bases in basically all human mtDNA genomes in the dataset.
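The calculation itself is only a few lines of Python:

N = 16579        # bases in a full mtDNA genome
L = 15           # length of the prefix sequence
p_s = 1 / 4**L   # probability that two random 15-base sequences match in full
p_f = 1 - p_s    # probability of no match at a given starting index

P_f = p_f ** (N - L + 1)  # probability of zero matches over all N - 14 indexes
print(1 - P_f)            # ~1.54272e-05: at least one match is very unlikely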

I also tested how many instances of this sequence there are in a given genome, and the answer is either exactly 1 or 0, never more; and as noted above, 98.34% of the 664 genomes in the dataset contain the exact sequence in full.

So that’s great, but what if these 15 bases have a special function, and that’s why they’re in basically every genome? The argument would be: sure, these are special bases, but they don’t mark an alignment; they’re just in basically all genomes, at different locations, for some functional reason. We can address this question empirically, but first I’ll note that every mtDNA genome has what’s known as a D-Loop, suggesting again that there’s an objective structure to mtDNA.

The empirical test is based upon the fact that mtDNA is incredibly stable: offspring generally receive a perfect copy from their mother, with no mutations, though mutations can occur over large periods of time. As a result, the “true” global alignment for mtDNA should be able to produce basically perfect matches between genomes. Because there are 16,579 bases, there are 16,579 possible global alignments (i.e., circular rotations). The attached code tests all such alignments (see the sketch below) and asks: which alignments are able to exceed 99% matches between two genomes? Of the alignments that are able to exceed 99%, 99.41% are the default NIH alignment, suggesting that the NIH is using the true, global alignment for mtDNA.
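Below is a minimal sketch of what that test looks like (the function name and threshold handling are mine, and it assumes two genomes of equal length):

def rotations_exceeding(genome_a, genome_b, threshold=0.99):
    # Try every circular rotation of genome_b against genome_a, and return
    # the offsets whose base-by-base match rate exceeds the threshold.
    n = len(genome_a)
    offsets = []
    for k in range(n):
        rotated = genome_b[k:] + genome_b[:k]
        matches = sum(1 for x, y in zip(genome_a, rotated) if x == y)
        if matches / n > threshold:
            offsets.append(k)
    return offsets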

Code attached, more to come soon!