A no-free-lunch theorem for multitask learning
Steve Hanneke, Samory Kpotufe
Ann. Statist. 50(6): 3119-3143 (December 2022). DOI: 10.1214/22-AOS2189

Abstract

Multitask learning and related areas such as multisource domain adaptation address modern settings where data sets from N related distributions {P_t} are to be combined toward improving performance on any single such distribution D. A perplexing fact remains in the evolving theory on the subject: while we would hope for performance bounds that account for the contribution from multiple tasks, the vast majority of analyses result in bounds that improve at best in the number n of samples per task, but most often do not improve in N. As such, it might seem at first that the distributional settings or aggregation procedures considered in such analyses might be somehow unfavorable; however, as we show, the picture happens to be more nuanced, with interestingly hard regimes that might appear otherwise favorable.
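Schematically, the setting can be written as follows; this is a minimal sketch assuming the standard agnostic classification setup, and the symbols R_D, \mathcal{E}_D, and \hat{h} below are our placeholders rather than notation fixed by the paper.

    % N tasks, n labeled samples each; D is the target distribution.
    S_t = \{(X_{t,i}, Y_{t,i})\}_{i=1}^{n} \overset{\text{i.i.d.}}{\sim} P_t,
    \qquad t = 1, \dots, N.
    % A learner maps the pooled N \cdot n samples to a classifier \hat{h},
    % judged by its excess risk under the target D:
    R_D(h) = \Pr_{(X,Y) \sim D}\bigl( h(X) \neq Y \bigr), \qquad
    \mathcal{E}_D(\hat{h}) = R_D(\hat{h}) - \inf_{h} R_D(h).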

In particular, we consider a seemingly favorable classification scenario where all tasks P_t share a common optimal classifier h^*, and which can be shown to admit a broad range of regimes with improved oracle rates in terms of N and n. Some of our main results are:

∙ We show that, even though such regimes admit minimax rates accounting for both n and N, no adaptive algorithm exists: that is, without access to distributional information, no algorithm can guarantee rates that improve with large N for fixed n (stated schematically after this list).

∙ With a bit of additional information, namely, a ranking of tasks {P_t} according to their distance to a target D, a simple rank-based procedure can achieve near-optimal aggregations of the tasks’ data sets, despite a search space exponential in N (a concrete reading is sketched after this list). Interestingly, the optimal aggregation might exclude certain tasks, even though they all share the same h^*.
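The impossibility claim in the first bullet can be stated schematically as below. This is our paraphrase of the abstract's claim, not the paper's exact theorem; the regime classes \mathcal{M}_\theta, the constant c, and the rates \phi, \psi are placeholder symbols.

    % Each favorable regime \mathcal{M}_\theta admits an oracle rate improving with N:
    \inf_{\tilde{h}} \, \sup_{\{P_t\} \in \mathcal{M}_\theta}
    \mathbb{E}\, \mathcal{E}_D(\tilde{h}) \;=\; \psi_\theta(n, N),
    \qquad \text{decreasing in } N \text{ for } n \text{ fixed};
    % yet no adaptive algorithm (one not knowing \theta) matches these rates uniformly:
    \forall \hat{h} \;\; \exists\, \theta : \quad
    \sup_{\{P_t\} \in \mathcal{M}_\theta} \mathbb{E}\, \mathcal{E}_D(\hat{h})
    \;\geq\; c\, \phi(n) \;\gg\; \psi_\theta(n, N) \quad \text{for large } N.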
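The second bullet suggests a simple implementation idea: once tasks are ranked by distance to the target, one need only compare the N prefix aggregations of the ranking rather than all 2^N subsets. The sketch below is our illustration under that reading, not the paper's actual procedure; the function names (erm_threshold, aggregate_by_rank), the toy threshold hypothesis class, and the held-out validation step are all our assumptions.

    import numpy as np

    def erm_threshold(x, y):
        """ERM over the toy class h_t(x) = 1{x >= t} on 1-d features.
        A placeholder hypothesis class; any ERM subroutine would do."""
        best_t, best_err = -np.inf, np.mean(y != 1)  # t = -inf predicts all 1s
        for t in np.sort(x):
            err = np.mean((x >= t).astype(int) != y)
            if err < best_err:
                best_t, best_err = t, err
        return best_t

    def aggregate_by_rank(ranked_tasks, x_val, y_val):
        """ranked_tasks: list of (x_t, y_t) arrays, ordered by increasing
        distance of P_t to the target D (the ranking assumed given).
        Tries only the N prefix aggregations -- a linear search replacing
        the exponential search over subsets -- and keeps the prefix whose
        ERM classifier does best on held-out target data."""
        x_pool = np.empty(0)
        y_pool = np.empty(0, dtype=int)
        best_err, best_t = np.inf, -np.inf
        for x_t, y_t in ranked_tasks:
            x_pool = np.concatenate([x_pool, x_t])
            y_pool = np.concatenate([y_pool, y_t])
            t = erm_threshold(x_pool, y_pool)
            err = np.mean((x_val >= t).astype(int) != y_val)
            if err < best_err:
                best_err, best_t = err, t
        # The selected prefix may drop later (farther) tasks, matching the
        # remark that the optimal aggregation can exclude tasks even though
        # all share the same optimal classifier.
        return best_t

    if __name__ == "__main__":
        rng = np.random.default_rng(0)

        def make_task(noise, n):
            x = rng.normal(size=n)
            y = (x >= 0).astype(int)        # shared optimal classifier: 1{x >= 0}
            flips = rng.random(n) < noise   # farther tasks = noisier labels
            y[flips] = 1 - y[flips]
            return x, y

        tasks = [make_task(noise, 50) for noise in (0.05, 0.1, 0.25, 0.45)]
        x_val, y_val = make_task(0.0, 200)  # held-out target data
        print("selected threshold:", aggregate_by_rank(tasks, x_val, y_val))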

Acknowledgments

The second author was visiting the Institute for Advanced Study and Google Research in Princeton, NJ, for a major part of this work.

Citation


Steve Hanneke, Samory Kpotufe. "A no-free-lunch theorem for multitask learning." Ann. Statist. 50(6): 3119–3143, December 2022. https://doi.org/10.1214/22-AOS2189

Information

Received: 1 July 2020; Revised: 1 February 2022; Published: December 2022
First available in Project Euclid: 21 December 2022

MathSciNet: MR4524491
zbMATH: 07641120
Digital Object Identifier: 10.1214/22-AOS2189

Subjects:
Primary: 62H30, 68Q32, 68T05
Secondary: 68T10

Keywords: classification, multitask learning, statistical learning theory, transfer learning

Rights: Copyright © 2022 Institute of Mathematical Statistics

JOURNAL ARTICLE
25 PAGES

