A new way to explain neural networks

Ben Plomion
Contributor
Ben Plomion is the chief image scientist of GumGum.

By now, most of us have a general idea of what a neural network is, at least insofar as it enables the “machine learning” part of what’s considered AI today. Often discussed under the banner of deep learning, neural networks are the algorithmic constructs that enable machines to get better at everything from facial recognition and car collision avoidance to medical diagnoses and natural-language processing.

Explaining exactly how artificial neural networks (ANNs) work without resorting to math can feel like a lost cause, though. They’re often likened to neural pathways in the human brain, but that comparison isn’t quite accurate, and it’s lost on anyone who didn’t pay attention in science class.

So maybe it’s time for a new analogy, which is precisely what filmmaker Ben Sharony and PokeGravy Studios deliver in A.N.N., an animated short they created for us. With a music score by Edmund Jolliffe, the video follows the story of A.N.N. (pronounced “Ann”), a quirky computer that doesn’t quite fit in with all the other computers, which like to be “fed” information.

A.N.N., however, prefers to learn on her own. The video follows this computer-as-neural-network as she learns to identify (and find) an object, which starts off as a mere hashtag in the eyes of the computer. A.N.N. makes several mistakes until, through trial and error (and feedback that nicely sums up the backpropagation process), she finally learns to identify (and find) the proper item.

In many ways, deep learning is that simple. To identify a particular object, an image recognition neural network breaks it down into features such as shape, color, and surface texture, makes a guess, and then uses backpropagation to adjust its internal weights based on how wrong that guess was. Repeated over many examples, this trial-and-error loop gradually narrows the network’s predictions down to something accurate.
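To make that loop concrete, here is a minimal sketch in Python (not GumGum’s actual system, and far simpler than a real image network): a single artificial neuron that learns to recognize a target object from three made-up feature scores (shape, color, surface). The training data and feature values are invented for illustration; the guess-measure-error-adjust cycle is the same backpropagation idea the video depicts.

```python
import math
import random

random.seed(0)

# Hypothetical data: [shape, color, surface] scores, and 1 if the
# example is the target object, 0 otherwise. Values are invented.
data = [
    ([1.0, 0.9, 0.8], 1),
    ([0.9, 1.0, 0.7], 1),
    ([0.1, 0.2, 0.3], 0),
    ([0.2, 0.1, 0.4], 0),
]

weights = [random.uniform(-0.5, 0.5) for _ in range(3)]
bias = 0.0
lr = 0.5  # learning rate: how big each corrective tweak is

def predict(features):
    # Weighted sum of features, squashed to (0, 1) by a logistic sigmoid
    z = sum(w * f for w, f in zip(weights, features)) + bias
    return 1 / (1 + math.exp(-z))

for epoch in range(1000):
    for features, label in data:
        p = predict(features)
        error = p - label  # how wrong was the guess?
        # Backpropagation for one neuron: nudge each weight against
        # the gradient of the squared error
        for i, f in enumerate(features):
            weights[i] -= lr * error * p * (1 - p) * f
        bias -= lr * error * p * (1 - p)
```

After training, `predict` scores a target-like feature vector close to 1 and a non-target close to 0; a real deep network stacks many layers of such units and learns the features themselves, but the feedback mechanism is the same.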

from TechCrunch https://tcrn.ch/2NGoxMh