Dr. Cameron Buckner
Department of Philosophy
University of Houston
March 31st, 2021 at 4:00pm
Registration required. [Click here to register.]

This online lecture, co-sponsored by the Department of Philosophy and the Institute for Artificial Intelligence, is part of the Scott & Heather Kleiner Lecture Series. After registering, you will receive a confirmation email containing information about joining the meeting.

Abstract

Over the last five years, deep neural networks have accomplished feats that skeptics thought would remain beyond the reach of artificial intelligence for at least several more decades. The researchers who developed these networks argue that their success derives from their ability to construct increasingly abstract, hierarchically structured representations of the environment. Skeptics of deep learning, however, point to the bizarre ways that deep neural networks seem to fail. The most vivid illustrations are "adversarial examples": small modifications of images, imperceptible or incoherent to humans, that can dramatically change a network's decisions (a minimal code sketch of one such attack appears at the end of this announcement). Skeptics take these failures to show that the networks are not capable of meaningful abstraction at all.

In this talk, I draw on the work of empiricist philosophers like Locke and Hume to articulate four different methods of abstraction that deep neural networks can apply to their inputs to build general category representations. I then review recent empirical research that raises an intriguing possibility: the apparently bizarre performance of deep neural networks on adversarial examples may actually show that, when we increase their parameters beyond biologically plausible ranges, they can use those same methods of abstraction to discover real and useful properties that lie beyond human ken. This might allow these networks to blow past the frontier of human understanding in scientific domains characterized by extreme complexity, such as particle physics, protein folding, and neuroscience, though possibly on the condition that humans can never fully understand the artificial systems' discoveries. I end by offering some guiding principles for exploring this inscrutable terrain, which contains both dangers and opportunities. Specifically, I argue that machine learning is here rediscovering classic problems of scientific reasoning from philosophy (the "riddles of induction"), and that we need new methods to decide which inscrutable properties are suitable targets of scientific research and which are merely the distinctive processing artifacts of deep learning.

Biography

Cameron Buckner is an Associate Professor in the Department of Philosophy at the University of Houston. He began his academic career in logic-based artificial intelligence. That research inspired an interest in the relationship between classical models of reasoning and the (usually very different) ways that humans and animals actually solve problems, which led him to the discipline of philosophy. He received a PhD in Philosophy from Indiana University in 2011 and held an Alexander von Humboldt Postdoctoral Fellowship at Ruhr-University Bochum from 2011 to 2013. His research interests lie at the intersection of philosophy of mind, philosophy of science, animal cognition, and artificial intelligence, and he teaches classes on all of these topics.
Recent representative publications include “Empiricism without Magic: Transformational Abstraction in Deep Convolutional Neural Networks” (2018, Synthese) and “Rational Inference: The Lowest Bounds” (2017, Philosophy and Phenomenological Research); the latter won the American Philosophical Association's Article Prize for 2016–2018. He is currently writing a book about the philosophy of deep learning (with support from the National Science Foundation).
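For readers who want to see how an adversarial example of the kind the abstract describes is actually constructed, here is a minimal sketch of one standard attack, the fast gradient sign method (FGSM) of Goodfellow et al. This illustration is added for this page and is not material from the talk; the model, labels, and epsilon budget are placeholder assumptions.

```python
# Minimal FGSM (fast gradient sign method) sketch -- the model, inputs, and
# epsilon value below are illustrative placeholders, not details from the talk.
import torch
import torch.nn.functional as F

def fgsm_attack(model: torch.nn.Module, x: torch.Tensor, y: torch.Tensor,
                epsilon: float = 0.03) -> torch.Tensor:
    """Perturb image batch `x` (pixel values in [0, 1]) so that `model` is
    pushed toward misclassifying it, changing no pixel by more than `epsilon`."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)   # loss with respect to the true labels
    loss.backward()                       # gradients of the loss flow back to the pixels
    # Step every pixel a tiny amount in the direction that increases the loss.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Hypothetical usage with any pretrained classifier that returns class logits:
#   model.eval()
#   x_adv = fgsm_attack(model, images, labels, epsilon=0.03)
#   model(x_adv).argmax(dim=1)  # often differs from `labels`
```

With epsilon in roughly the 0.01–0.05 range on images scaled to [0, 1], the perturbation is typically invisible to a human viewer yet often flips a standard classifier's prediction, which is precisely the behavior the abstract's skeptics cite.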