I spend a lot of time thinking about signal and noise, and my intuition is that a signal is something with low Kolmogorov complexity, while noise increases the Kolmogorov complexity of whatever it corrupts. That sounds nice, and I think it makes sense, but the question is, what can I do with this definition? So far the answer is nothing; it's purely academic.

However, it just dawned on me that if noise is statistically random, then it should begin to cancel itself out when it's sampled independently at a large number of positions. Imagine pure noise sampled at many different places at the same time: as the number of samples grows sufficiently large, the average should either net to zero (additive and subtractive noise, i.e., zero-mean, where cancellation follows from the law of large numbers) or accumulate in amplitude (strictly additive noise). This in turn implies that we should be able to discern between true noise and a signal subject to noise by increasing the number of independent samples of the ostensible signal. Specifically, if the average shrinks toward nothing, or grows in amplitude without changing structure, then it's pure noise. If it instead changes structure, that suggests there is an underlying signal whose additive and subtractive noise is starting to net itself out. In contrast, I don't think there is any obvious test for strictly additive noise.

Nonetheless, this is a pretty good practical definition of signal versus noise, and it works in practice: I just wrote some simple code in Octave that increases the number of independent noisy representations of a signal, and their average plainly reduces the noise as a function of the number of independent representations.
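To make this concrete, here is a minimal Octave sketch of the kind of experiment described above. It is not the original script; the sine-wave signal, the noise level sigma, and the sample counts are all illustrative assumptions. It averages k independent noisy representations of a signal, and, for comparison, averages k runs of pure zero-mean noise, so you can watch the former converge toward the signal while the latter nets toward zero.

```octave
% Sketch of the averaging experiment described above (not the original script).
% The signal, noise level, and sample counts are illustrative assumptions.
t = linspace(0, 2*pi, 500);        % sample positions
signal = sin(3*t);                 % underlying low-complexity signal (assumed)
sigma = 0.8;                       % noise standard deviation (assumed)

for k = [1 10 100 1000]
  % k independent noisy representations of the same signal
  noisy = repmat(signal, k, 1) + sigma * randn(k, length(t));
  avg = mean(noisy, 1);            % average across the k representations
  err = sqrt(mean((avg - signal).^2));   % RMS deviation from the true signal

  % k independent samples of pure zero-mean noise, averaged the same way
  pure = mean(sigma * randn(k, length(t)), 1);
  pure_rms = sqrt(mean(pure.^2));

  printf("k = %4d  RMS error vs signal = %.4f  RMS of averaged pure noise = %.4f\n", ...
         k, err, pure_rms);
end
```

Because the noise terms are independent and zero-mean, both RMS figures should fall off roughly as sigma/sqrt(k), which is the law-of-large-numbers behavior the argument relies on: the averaged noisy signal changes structure (converging on the sine wave), while the averaged pure noise simply shrinks without revealing any structure.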