Friday, November 6, 2015

Toyota Invests $1 Billion in Machine Learning; Will we become slaves to clever machines?


"Machine Learning" is a very hot new field that is taking over from the older field of Artificial Intelligence, and is central to increasingly ubiquitous technologies such as Siri, Watson and self-driving cars. It also has increasingly strong links to neuroscience, and draws on applied math, statistics, physics and computer science. It's sometimes referred to as the "New AI".
ML is essentially the science of learning by machines (especially computers). Since the central assumption underlying neuroscience is that the brain is a machine, and since neural plasticity and learning are fundamental to brain function, especially in mammals, the two sciences are natural allies.
In today's New York Times a front page article (http://www.nytimes.com/2015/11/06/technology/toyota-silicon-valley-artificial-intelligence-research-center.html?emc=eta1) reveals that Toyota is investing $1B in ML in Silicon Valley, already the epicenter of ML. The same page features a banner ad by IBM touting Watson and the "Cognitive Era".

Why has ML moved to the fore? First, it's increasingly realized that learning is the key to intelligence. Indeed, one could almost define intelligence as the ability to learn how to solve problems - any problem, but especially new problems. Second, there's an increasing focus on rigorous, quantitative approaches, often based on statistics, in particular so-called "Bayesian statistics", a systematic approach to improving one's hypotheses as new information becomes available. Third, rapid (though somewhat decelerating) advances in computing power allow the heavy number crunching required by ML techniques. Fourth, some of the most powerful ML approaches are partly inspired by neuroscience, so advances in the two fields are synergistic.
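To make the Bayesian idea concrete, here is a minimal sketch of updating one's belief in a hypothesis as evidence accumulates. The numbers (a 50/50 starting belief, evidence 80% likely under the hypothesis and 30% likely otherwise) are made up purely for illustration:

```python
# Minimal sketch of Bayesian updating: revise belief in a hypothesis H
# as each new piece of evidence E arrives. All numbers are illustrative.

def bayes_update(prior, likelihood, likelihood_alt):
    """Posterior P(H|E) via Bayes' rule, comparing H against not-H."""
    evidence = likelihood * prior + likelihood_alt * (1.0 - prior)
    return likelihood * prior / evidence

# Start at 50/50; each observation is 80% likely if H is true, 30% if not.
belief = 0.5
for _ in range(3):
    belief = bayes_update(belief, likelihood=0.8, likelihood_alt=0.3)

print(round(belief, 3))  # prints 0.95
```

Three pieces of modestly favorable evidence already push the belief from 0.5 to about 0.95, which is the sense in which hypotheses "improve as new information becomes available."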

In the course we already touched on one of the simplest and oldest examples of ML when we considered motor learning in the cerebellum. We saw that parallel fibers make synapses on Purkinje neurons, and these can automatically change their strength based on two coincident factors: the parallel fiber firing (signalled by glutamate release) and an error signal (conveyed by climbing fiber firing). We formulated this as "weight decrease at synapse number i is proportional to PF number i firing rate times CF firing rate" - sometimes known as the "delta rule".
Clearly, once the movement error goes to zero under this rule, the PF strengths will stop changing, suggesting that a Purkinje cell might learn to fire in the way needed for accurate movements (by inhibiting deep cerebellar neurons that influence movement details). However, we did not actually prove that this delta rule always improves things; doing so requires the implicit assumption that there ARE PF synapse strengths that allow perfect movements.
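The delta rule above can be sketched in a few lines of code. This is my own toy model, not anything from the course or the article: a handful of PF synapses drive a simulated Purkinje cell, the CF carries the difference between actual and desired drive, and each weight changes in proportion to its PF rate times that error. As the post notes, the sketch builds in the assumption that a set of "perfect" PF strengths exists:

```python
import random

# Toy simulation of the cerebellar delta rule (illustrative model only).
# PF synapse i onto a Purkinje cell changes in proportion to PF_i's firing
# rate times the climbing-fiber (CF) error; learning stops when error is zero.

random.seed(0)
n_pf = 5
weights = [1.0] * n_pf                                     # current PF synapse strengths
target = [random.uniform(0.2, 0.8) for _ in range(n_pf)]   # assumed "perfect" strengths
lr = 0.2                                                   # learning rate

for _ in range(2000):
    pf = [random.uniform(0.0, 1.0) for _ in range(n_pf)]   # PF firing rates this trial
    output = sum(w * x for w, x in zip(weights, pf))       # Purkinje cell drive
    desired = sum(t * x for t, x in zip(target, pf))       # drive for an accurate movement
    cf = output - desired                                  # CF conveys the movement error
    # Delta rule: change at synapse i is proportional to PF_i rate times CF error
    weights = [w - lr * cf * x for w, x in zip(weights, pf)]

residual = sum(abs(w - t) for w, t in zip(weights, target))
```

After a couple of thousand simulated movements the weights sit very close to the target strengths, illustrating (though not proving) the claim that the rule drives the error toward zero when perfect strengths exist.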

Clearly this delta rule has a "Hebbian" flavor (see my last post) - synapse strength change depends on both input and output firing. Related rules underlie many of the most sophisticated new ML techniques.
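For contrast, here is the plain Hebbian version of such a rule, again just my sketch: the synapse simply strengthens whenever input and output fire together, with no error signal involved.

```python
# Plain Hebbian update (illustrative): the weight change depends only on
# coincident pre- and postsynaptic firing, with no error term.
def hebbian_update(w, pre_rate, post_rate, lr=0.01):
    """Return the new weight after one Hebbian step: dw = lr * pre * post."""
    return w + lr * pre_rate * post_rate

w = 0.5
w = hebbian_update(w, pre_rate=1.0, post_rate=2.0)  # w is now 0.52
```

The delta rule swaps the postsynaptic term for an error signal, which is what lets learning stop once performance is good.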

Will ML succeed, and if so, will machines take over our jobs, condemning almost all of us to abject poverty? For ML to succeed, our machines (e.g. computers) must be able to do the required number crunching, and this tends to become prohibitively expensive as problems grow. So far Moore's Law has allowed hardware to keep up with software, but that progress is now slowing, and researchers are exploring "neuromorphic" (brainlike) strategies. But it's not yet clear that implementing Hebbian synapses at extremely high density is straightforward either for the brain or for machines (see my last post).

This is really not just a scientific question, but also one about politics and morality: should the owners of these technologies become the new economic aristocrats that the USA was founded to eliminate? In the meantime, it's an exciting period in neuroscience and AI.
