Out-of-Domain Generalization From a Single Source: An Uncertainty Quantification Approach
Date
2022-06-20
Authors
X. Peng, F. Qiao, L. Zhao
Journal Title
IEEE Transactions on Pattern Analysis and Machine Intelligence
Publisher
IEEE
Abstract
We are concerned with a worst-case scenario in model generalization: a model aims to perform well on many unseen domains while only a single domain is available for training. We propose Meta-Learning based Adversarial Domain Augmentation to solve this Out-of-Domain generalization problem. The key idea is to leverage adversarial training to create “fictitious” yet “challenging” populations, from which a model can learn to generalize with theoretical guarantees. To enable fast and desirable domain augmentation, we cast the model training in a meta-learning scheme and use a Wasserstein Auto-Encoder to relax the widely used worst-case constraint. We further improve our method by integrating uncertainty quantification for efficient domain generalization. Extensive experiments on multiple benchmark datasets demonstrate its superior performance in tackling single-domain generalization.
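To make the abstract's core mechanism concrete, the following is a minimal, hypothetical Python (PyTorch) sketch of adversarial domain augmentation, not the authors' released code: it perturbs a source batch by gradient ascent on the task loss so the resulting samples are "fictitious yet challenging", with a plain L2 proximity penalty standing in for the paper's Wasserstein Auto-Encoder relaxation of the worst-case constraint. All names (adversarial_augment, k_steps, step_size, gamma) are illustrative assumptions.

# Minimal sketch of adversarial domain augmentation (hypothetical names,
# not the authors' released implementation).
import torch
import torch.nn.functional as F

def adversarial_augment(model, x, y, k_steps=5, step_size=1.0, gamma=1.0):
    """Turn a source batch (x, y) into a harder "fictitious" batch by
    ascending the task loss; an L2 term keeps samples near the source
    (a stand-in for the paper's Wasserstein Auto-Encoder relaxation)."""
    x_adv = x.clone().detach().requires_grad_(True)
    for _ in range(k_steps):
        task_loss = F.cross_entropy(model(x_adv), y)
        # Relaxed worst-case objective: raise the loss, but penalize
        # drifting too far from the original source samples.
        objective = task_loss - gamma * F.mse_loss(x_adv, x)
        (grad,) = torch.autograd.grad(objective, x_adv)
        with torch.no_grad():
            x_adv += step_size * grad  # gradient *ascent*: harder samples
    return x_adv.detach()

In the paper's meta-learning scheme, batches produced this way would play the role of held-out "fictitious" domains during episodic training, while the original source batch serves as the meta-train split.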
Description
© 2022 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. This article was originally published in IEEE Transactions on Pattern Analysis and Machine Intelligence. The version of record is available at: https://doi.org/10.1109/TPAMI.2022.3184598
Keywords
adversarial training, domain generalization, meta-learning, uncertainty quantification
Citation
X. Peng, F. Qiao and L. Zhao, "Out-of-Domain Generalization From a Single Source: An Uncertainty Quantification Approach," in IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022, doi: 10.1109/TPAMI.2022.3184598.