Abstract
We study the multi-task learning problem, which aims to simultaneously analyze multiple data sets collected from different sources and to learn one model for each of them. We propose a family of adaptive methods that automatically exploit possible similarities among the tasks while carefully handling their differences. We derive sharp statistical guarantees for the methods and prove their robustness against outlier tasks. Numerical experiments on synthetic and real data sets demonstrate the efficacy of our new methods.
Funding Statement
Kaizheng Wang’s research is supported by an NSF grant (DMS-2210907) and a startup grant at Columbia University. We acknowledge computing resources from Columbia University’s Shared Research Computing Facility project, which is supported by NIH Research Facility Improvement Grant 1G20RR030893-01 and associated funds from the New York State Empire State Development, Division of Science, Technology and Innovation (NYSTAR) Contract C090171, both awarded April 15, 2010. Part of this research was conducted while Yaqi Duan was affiliated with the Laboratory for Information and Decision Systems at the Massachusetts Institute of Technology and the Department of Operations Research and Financial Engineering at Princeton University.
Acknowledgment
We are grateful to two anonymous referees and the Associate Editor for their helpful comments. We thank Chen Dan, Dongming Huang, Yuhang Wu, and Yichen Zhang for helpful discussions.
Citation
Yaqi Duan and Kaizheng Wang. "Adaptive and robust multi-task learning." Ann. Statist. 51(5): 2015–2039, October 2023. https://doi.org/10.1214/23-AOS2319