Confidence-Based Prediction (Single Iteration)

I’m in the process of implementing my software for macOS for commercial sale in the Apple App Store, and in particular, I’m marketing my confidence-based prediction as a “Magic Button,” because it actually does increase accuracy radically as a function of confidence. The issue, however, is that I’ve tested it on average, over a large number of iterations, to be sure that I haven’t relied upon anomalies:

I.e., given an entire dataset, I tested it on several hundred random training / testing splits of that dataset.

This produced increasing accuracy, on average, as a function of confidence, regardless of the dataset, which suggests my ideas are correct as an empirical matter. However, when applying this software in the real world, you have to be right the first time, not on average: it doesn’t matter whether your ideas are right in general, it matters that they can be applied every time. So this led to a little tinkering with the equation for delta, and the result is awesome, consistently producing 90%+ accuracy the first time, not on average, regardless of the dataset. Below is a plot of accuracy as a function of confidence for the UCI Parkinson’s Dataset, which you’ll note is not as smooth as the curves produced in my previous note, because there’s no averaging; this is one shot, and it’s right.

The attached code is cued up for the UCI Credit Dataset.
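For readers who want to experiment with the general idea before looking at the attached code, here is a minimal sketch of confidence-filtered prediction in Python. To be clear, this is an illustration only: the “delta” below is just a stand-in confidence score (the gap between the nearest and second-nearest neighbor distances), not the actual delta equation used in the attached code, and the data is synthetic rather than the UCI Credit Dataset.

```python
# Minimal sketch of confidence-filtered nearest-neighbor prediction.
# The confidence score here is an assumption for illustration, not the
# delta equation from the attached code.
import numpy as np

def nn_predict_with_confidence(X_train, y_train, X_test):
    """Predict the nearest training row's label and return a crude
    per-prediction confidence score (larger = more confident)."""
    preds, confs = [], []
    for x in X_test:
        d = np.linalg.norm(X_train - x, axis=1)  # distances to all training rows
        order = np.argsort(d)
        preds.append(y_train[order[0]])          # nearest-neighbor label
        # Confidence proxy: how much closer the nearest row is than the runner-up.
        confs.append(d[order[1]] - d[order[0]])
    return np.array(preds), np.array(confs)

def accuracy_by_confidence(preds, confs, y_test, num_levels=10):
    """Accuracy over the subset of predictions at or above each confidence level."""
    thresholds = np.quantile(confs, np.linspace(0.0, 0.9, num_levels))
    rows = []
    for t in thresholds:
        keep = confs >= t
        acc = (preds[keep] == y_test[keep]).mean()
        rows.append((t, int(keep.sum()), acc))
    return rows

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic two-class data standing in for a UCI-style dataset.
    X = np.vstack([rng.normal(0.0, 1.0, size=(200, 5)),
                   rng.normal(1.5, 1.0, size=(200, 5))])
    y = np.array([0] * 200 + [1] * 200)
    idx = rng.permutation(len(y))
    train, test = idx[:300], idx[300:]

    preds, confs = nn_predict_with_confidence(X[train], y[train], X[test])
    for t, n, acc in accuracy_by_confidence(preds, confs, y[test]):
        print(f"confidence >= {t:.3f}: {n:3d} predictions, accuracy = {acc:.3f}")
```

Running this prints accuracy over progressively more confident subsets of the test predictions, which is the same kind of accuracy-versus-confidence curve plotted above, just for a toy dataset and a toy confidence measure.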

