Recent developments in large-scale machine learning suggest that, with appropriate scaling of data, model size, and training time, improvements in pre-training transfer favorably to most downstream tasks. In this work, we systematically study this phenomenon and establish that, as upstream accuracy increases, downstream performance saturates.
In particular, we investigate more than 4800 experiments on Vision Transformers, MLP-Mixers, and ResNets, with parameter counts ranging from ten million to ten billion, trained on the largest available image datasets (JFT, ImageNet21K) and evaluated on more than 20 downstream image recognition tasks. We propose a model of downstream performance that reflects this saturation and captures the nonlinear relationship between upstream and downstream performance. Delving deeper into the reasons behind these phenomena, we show that the saturation behavior we observe is closely related to the way representations evolve through the layers of the models. We also showcase an even more extreme scenario in which upstream and downstream performance are at odds with each other: to achieve better downstream performance, we must sacrifice upstream accuracy.
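To make the saturation idea concrete, the sketch below fits a hypothetical saturating power law relating downstream error to upstream error. This is an illustration on synthetic data, not the model proposed in the work itself; the functional form, parameter names (`c`, `k`, `alpha`), and the grid-search fit are all assumptions made for the example.

```python
import numpy as np

def downstream_error(us_error, c, k, alpha):
    # Hypothetical saturating power law: downstream error shrinks as
    # upstream error shrinks, but floors at an irreducible constant c.
    return c + k * us_error ** alpha

# Synthetic upstream errors (1 - upstream accuracy), purely for illustration.
us_err = np.linspace(0.4, 0.05, 20)
true_c, true_k, true_alpha = 0.12, 0.5, 0.8
ds_err = downstream_error(us_err, true_c, true_k, true_alpha)

# Crude grid-search fit (keeps the sketch free of a scipy dependency).
best = None
for c in np.linspace(0.0, 0.3, 31):
    for k in np.linspace(0.1, 1.0, 19):
        for alpha in np.linspace(0.2, 2.0, 19):
            loss = np.mean((downstream_error(us_err, c, k, alpha) - ds_err) ** 2)
            if best is None or loss < best[0]:
                best = (loss, c, k, alpha)

_, c_hat, k_hat, alpha_hat = best
# c_hat estimates the saturation floor: even perfect upstream accuracy
# would leave roughly c_hat downstream error under this toy model.
print(c_hat, k_hat, alpha_hat)
```

The key qualitative point is the nonzero floor `c`: past some point, further upstream gains buy vanishingly little downstream improvement.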
Dr. Hanié Sedghi is a senior research scientist at Google Brain, where she leads the "Deep Phenomena" research group. Her approach is to bridge theory and practice in large-scale machine learning, and her primary research interest is understanding and improving deep learning.
Before joining Google, she was a research scientist at the Allen Institute for Artificial Intelligence and, before that, a postdoctoral fellow at the University of California, Irvine. Hanié received her Ph.D. from the University of Southern California with a minor in mathematics, and her MSc and BSc from the School of Electrical and Electronic Engineering at Sharif University of Technology, Iran.