"As Eric and EY jointly point out, this article ..."

https://arbital.com/p/89c

by Ryan Carey Apr 24 2017


As Eric and EY jointly point out, this article seems to be roughly pointing at a simple classifier that places a big penalty on false positives, e.g.:

loss = 100(1 − λ)(false positive rate) + (1 − λ)(false negative rate) + λ(regularization)
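A minimal sketch of that loss (the function name, L2 regularizer, and binary-array encoding are my own illustrative assumptions, not anything from the article):

```python
import numpy as np

def conservative_loss(y_true, y_pred, weights, lam, fp_weight=100.0):
    """Loss that penalizes false positives much more heavily than
    false negatives, per the formula above.

    y_true, y_pred : binary {0, 1} arrays of labels and predictions
    weights        : model parameters (for the regularization term)
    lam            : trade-off between classification error and simplicity
    """
    fp_rate = np.mean((y_pred == 1) & (y_true == 0))  # false positive rate
    fn_rate = np.mean((y_pred == 0) & (y_true == 1))  # false negative rate
    regularization = np.sum(weights ** 2)  # e.g. an L2 penalty (assumed)
    return (fp_weight * (1 - lam) * fp_rate
            + (1 - lam) * fn_rate
            + lam * regularization)
```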

After all, the purpose of the regularization term is to ensure simplicity.

To the extent that conservative concepts are at all different from this, the difference should run through the notions of ambiguity detection and KWIK ("knows what it knows") learning. At least, that's what machine learning people will round the proposal off to until there is some other concrete proposal. Though maybe I'm missing something.
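For concreteness, the simplest KWIK-style learner is one that abstains rather than guess: it answers only on inputs it has already seen, so it never makes a confident mistake. This toy memorization learner is my own illustrative sketch (for deterministic targets), not a proposal from the article:

```python
class MemorizingKWIKLearner:
    """Toy KWIK-style learner: predict only when it knows, else
    return None ("I don't know").

    It memorizes observed (x, y) pairs and abstains on unseen inputs,
    so every non-None prediction is guaranteed correct for a
    deterministic target function -- the KWIK-style guarantee.
    """

    def __init__(self):
        self.memory = {}

    def predict(self, x):
        # None signals ambiguity: the learner has no basis to answer.
        return self.memory.get(x)

    def observe(self, x, y):
        # Record the true label so future predictions on x are known.
        self.memory[x] = y
```

A conservative classifier in this style would treat the None outputs as the ambiguous cases to flag for a human rather than act on.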