As noted, I’m writing a macOS version of my core A.I. software, and because Swift is not vectorized, the port is not an exact translation, which requires me to reevaluate my code. In this case, I took a look at my normalization script, and it’s so complicated that I simply redid it in Octave, for the sole purpose of rewriting it in Swift (see attached). The idea is the same:
You cluster the dimensions by number of digits, and then iteratively reduce the largest dimensions (i.e., the cluster that contains the largest dimension) by dividing by powers of ten, testing the accuracy at each scale. This is just an easier way of expressing the same idea; a rough sketch follows below.
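To make the idea concrete, here is a minimal Octave sketch of that process, not the attached script itself. It assumes a data matrix X (rows are examples, columns are dimensions), a label vector, and a placeholder function test_accuracy(X, labels) standing in for whatever accuracy test you run at each scale; all names here are hypothetical.

```octave
% Sketch: cluster dimensions by digit count, then shrink the cluster
% containing the largest dimension by powers of ten, keeping the scale
% that tests best. test_accuracy is a placeholder supplied elsewhere.
function [X_best, best_acc] = normalize_by_digits(X, labels, max_power)
  % Number of integer digits in the largest value of each dimension (column).
  digits = floor(log10(max(abs(X), [], 1) + eps)) + 1;

  % The cluster of dimensions sharing the largest digit count.
  top_cluster = (digits == max(digits));

  X_best = X;
  best_acc = test_accuracy(X, labels);

  % Divide the top cluster by successive powers of ten and test each scale.
  for p = 1:max_power
    X_try = X;
    X_try(:, top_cluster) = X(:, top_cluster) ./ 10^p;
    acc = test_accuracy(X_try, labels);
    if acc > best_acc
      best_acc = acc;
      X_best = X_try;
    end
  end
end
```

The same loop translates directly into Swift as nested array operations, since the only work per scale is a column-wise division and one accuracy test.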