Thursday, November 9, 8:30 – 9:30 am
Chesapeake Ballroom

Speaker: Yoshua Bengio, Professor, Director of MILA, Department of Computer Science and Operations Research and Canada Research Chair in Statistical Learning Algorithms, University of Montreal, Canada

Chair: Matt Davis, MRC Cognition and Brain Sciences Unit, Cambridge, UK

Yoshua Bengio is a world-leading expert on deep learning and author of the best-selling book on that topic. His research objective is to understand the mathematical and computational principles that give rise to intelligence through learning. He has contributed to a wide spectrum of machine learning areas and is well known for his theoretical results on recurrent neural networks, kernel machines, distributed representations, the depth of neural architectures, and the optimization challenges of deep learning. His work was crucial in advancing how deep networks are trained, how neural networks can learn vector embeddings for words, how to perform machine translation with deep learning by taking advantage of an attention mechanism, and how to perform unsupervised learning with deep generative models. He is the author of three books and more than 300 publications, is among the most cited Canadian computer scientists, and is or has been an associate editor of the top journals in machine learning and neural networks.

Bridging the gap between brains, cognition and deep learning

Connectionist ideas from three decades ago have fuelled a revolution in artificial intelligence with the rise of deep learning methods. Both the older connectionist ideas and the newer ones owe a lot to inspiration from the brain, but the gap between deep learning and neuroscience remains wide. We lay out some of these older ideas, based on learning distributed representations so that all the modules of a system can be jointly optimized by a gradient-based method with respect to an objective function, tied either to a task or to capturing many aspects of the observed data. We then turn to newer ideas from deep learning, including the recently acquired theoretical understanding of the advantages brought by jointly optimizing a deep architecture. Finally, we summarize some of the recent work aimed at bridging the remaining gap between deep learning and neuroscience, including approaches to implement functional equivalents of backpropagation in a more biologically plausible way, as well as ongoing work connecting language, cognition, reinforcement learning and the learning of abstract representations.
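To make the idea of joint, gradient-based optimization of all modules concrete, here is a minimal sketch (not taken from the talk; the network sizes, data and learning rate are illustrative) in which two stacked modules are trained end to end on a toy regression objective by plain backpropagation:

```python
# Minimal sketch (illustrative, not from the talk): two stacked modules trained
# jointly by gradient descent on a single objective, i.e. end-to-end backpropagation.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 10))              # toy inputs
y = np.sin(X.sum(axis=1, keepdims=True))   # toy regression target

# Module 1: linear + tanh; Module 2: linear readout.
W1 = rng.normal(scale=0.1, size=(10, 32)); b1 = np.zeros(32)
W2 = rng.normal(scale=0.1, size=(32, 1));  b2 = np.zeros(1)
lr = 0.1

for step in range(500):
    # Forward pass through both modules.
    h = np.tanh(X @ W1 + b1)                 # distributed hidden representation
    pred = h @ W2 + b2
    loss = np.mean((pred - y) ** 2)          # single objective shared by all modules

    # Backward pass: the same loss gradient is propagated to every module.
    d_pred = 2 * (pred - y) / len(X)
    dW2 = h.T @ d_pred;  db2 = d_pred.sum(axis=0)
    d_h = (d_pred @ W2.T) * (1 - h ** 2)     # backprop through the tanh module
    dW1 = X.T @ d_h;     db1 = d_h.sum(axis=0)

    # Joint gradient-based update of all parameters.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(f"final loss: {loss:.4f}")
```

The biologically implausible step this illustrates is the backward pass, which reuses the forward weights (`W2.T`) to assign credit to earlier modules; the talk's discussion of functional equivalents to backpropagation concerns how such credit assignment might be achieved more plausibly.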
