I appreciate what MIT has done here. However, the work described in the article is useless, misleading, or both. None of the training methods I've encountered in the literature maintain any measure of the mutual information between test images and training images. In short, the trained NN blindly spits out classifications with equal confidence for test images that are very similar to its training data and for test images that are very different from it. Feeding a Rorschach blot to a NN trained on a gore subreddit is no different from feeding static to the inputs, which, in turn, is just a roundabout way of assembling a set of substrings from the subreddit and calling choice(1) on that set.
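The closed-world behavior I'm describing can be sketched in a few lines. This is a minimal, hypothetical stand-in (random weights for a linear softmax head, not MIT's actual model): the softmax normalizes any logit vector into a probability distribution, so the classifier always commits to some class with nontrivial confidence, even on pure static.

```python
import math
import random

def softmax(logits):
    # Normalizes any logit vector into a probability distribution.
    # The outputs always sum to 1, no matter how far the input is
    # from anything seen in training -- there is no "none of the above".
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def classify(weights, x):
    # A linear classifier head: logits = W x, then softmax.
    logits = [sum(w * xi for w, xi in zip(row, x)) for row in weights]
    return softmax(logits)

random.seed(0)
n_classes, n_features = 5, 64

# "Trained" weights -- hypothetical placeholder values; the point
# holds for any fitted weights.
weights = [[random.gauss(0, 1) for _ in range(n_features)]
           for _ in range(n_classes)]

# Pure static on the inputs: nothing like any training image.
static = [random.gauss(0, 1) for _ in range(n_features)]
probs = classify(weights, static)

# The model still asserts a class for the noise.
print("predicted class:", probs.index(max(probs)))
print("confidence:", max(probs))
```

Nothing in the forward pass measures similarity to the training set, which is exactly why the confidence on static is meaningless.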