Browsing by Author "Peng, Xi"
Now showing 1 - 3 of 3
Item: Out-of-Domain Generalization From a Single Source: An Uncertainty Quantification Approach
(IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022-06-20)
Peng, Xi; Qiao, Fengchun; Zhao, Long

We are concerned with a worst-case scenario in model generalization: a model aims to perform well on many unseen domains while only a single domain is available for training. We propose Meta-Learning based Adversarial Domain Augmentation to solve this out-of-domain generalization problem. The key idea is to leverage adversarial training to create "fictitious" yet "challenging" populations, from which a model can learn to generalize with theoretical guarantees. To facilitate fast and desirable domain augmentation, we cast model training in a meta-learning scheme and use a Wasserstein Auto-Encoder to relax the widely used worst-case constraint. We further improve our method by integrating uncertainty quantification for efficient domain generalization. Extensive experiments on multiple benchmark datasets demonstrate its superior performance in tackling single-domain generalization.

Item: Region-aware Arbitrary-shaped Text Detection with Progressive Fusion
(IEEE Transactions on Multimedia, 2022-08-04)
Wang, Qitong; Fu, Bin; Li, Ming; He, Junjun; Peng, Xi; Qiao, Yu

Segmentation-based text detectors are flexible enough to capture arbitrary-shaped text regions. Due to large geometric variance, it is necessary to construct effective and robust representations that identify text regions of various shapes and scales. In this paper, we focus on designing effective multi-scale contextual features for locating text instances. Specifically, we develop a Region Context Module (RCM) to summarize the semantic response and adaptively extract text-region-aware information within a limited local area. To construct complementary multi-scale contextual representations, multiple RCM branches with different scales are employed and integrated via a Progressive Fusion Module (PFM).
Our proposed RCM and PFM serve as plug-and-play modules that can be incorporated into existing scene text detection frameworks to further boost detection performance. Extensive experiments show that our methods achieve state-of-the-art performance on the Total-Text, SCUT-CTW1500, and MSRA-TD500 datasets. The code and models will be made publicly available at https://github.com/wqtwjt1996/RP-Text.

Item: Semi-Identical Twins Variational AutoEncoder for Few-Shot Learning
(IEEE Transactions on Neural Networks and Learning Systems, 2023-01-09)
Zhang, Yi; Huang, Sheng; Peng, Xi; Yang, Dan

Data augmentation is a popular approach to few-shot learning (FSL). It generates additional samples as supplements and then transforms the FSL task into a common supervised learning problem. However, most data-augmentation-based FSL approaches consider only prior visual knowledge for feature generation, leading to low diversity and poor quality of the generated data. In this study, we address this issue by incorporating both prior visual and prior semantic knowledge to condition the feature generation process. Inspired by the genetic characteristics of semi-identical twins, we develop a novel multimodal generative FSL approach, the Semi-identical Twins Variational AutoEncoder (STVAE), which better exploits the complementarity of these modalities by casting multimodal conditional feature generation as a process in which semi-identical twins are born and collaborate to simulate their father. STVAE conducts feature synthesis by pairing two conditional variational autoencoders (CVAEs) with the same seed but different modality conditions. Subsequently, the features generated by the two CVAEs are treated as semi-identical twins and adaptively combined to yield the final feature, regarded as their fake father.
STVAE requires that the final feature can be converted back into its paired conditions, while ensuring these conditions remain consistent with the originals in both representation and function. Moreover, STVAE can still operate when some modalities are absent, owing to its adaptive linear feature-combination strategy. STVAE essentially offers a novel, genetics-inspired way to exploit the complementarity of different modality priors in FSL. Extensive experimental results demonstrate that our approach achieves promising performance compared with recent state-of-the-art methods and validate its effectiveness on FSL under various modality settings.
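The core move in the first abstract — adversarial training that fabricates "fictitious yet challenging" samples from a single source domain — can be illustrated with a minimal NumPy sketch. This is a toy under assumptions of my own (a logistic model with a hand-derived input gradient, arbitrary step sizes), not the authors' meta-learning pipeline or their Wasserstein Auto-Encoder relaxation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logistic_loss(w, b, x, y):
    """Numerically stable binary cross-entropy for y in {0, 1}."""
    margin = (2.0 * y - 1.0) * (np.dot(w, x) + b)
    return np.log1p(np.exp(-margin))

def input_gradient(w, b, x, y):
    """d(loss)/dx for the logistic model, derived by hand (no autograd)."""
    p = sigmoid(np.dot(w, x) + b)
    return (p - y) * w

def adversarial_augment(w, b, x, y, step=0.5, n_steps=3):
    """Create a 'fictitious' sample by ascending the loss surface in input space."""
    x_adv = x.copy()
    for _ in range(n_steps):
        x_adv = x_adv + step * input_gradient(w, b, x_adv, y)
    return x_adv

rng = np.random.default_rng(0)
w, b = rng.normal(size=3), 0.0
x, y = rng.normal(size=3), 1.0
x_adv = adversarial_augment(w, b, x, y)  # a harder sample for the current model
```

Training on such loss-ascending perturbations is the generic "worst-case" augmentation pattern; the paper's contribution is making this fast and well-behaved via meta-learning and a relaxed constraint.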
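The second abstract's summarize-locally-then-fuse-across-scales pattern (RCM branches combined by a PFM) can be caricatured in a few lines. Everything here — block-mean pooling as the "context summary", a fixed fusion weight `alpha`, the scale set — is my own stand-in, not the actual RCM/PFM architecture:

```python
import numpy as np

def pool_context(feat, scale):
    """Summarize a (H, W) response map over scale x scale blocks and broadcast
    the block mean back: a crude stand-in for a local context summary."""
    h, w = feat.shape
    out = np.empty_like(feat, dtype=float)
    for i in range(0, h, scale):
        for j in range(0, w, scale):
            out[i:i+scale, j:j+scale] = feat[i:i+scale, j:j+scale].mean()
    return out

def progressive_fusion(feat, scales=(1, 2, 4), alpha=0.5):
    """Blend context maps from coarse to fine, one scale at a time."""
    fused = pool_context(feat, scales[-1])   # start from the coarsest summary
    for s in reversed(scales[:-1]):          # progressively mix in finer ones
        fused = alpha * fused + (1.0 - alpha) * pool_context(feat, s)
    return fused

feat = np.arange(16.0).reshape(4, 4)
fused = progressive_fusion(feat)
```

Scale 1 returns the map unchanged and scale 4 returns the global mean, so the fusion interpolates between purely local and fully global context — the intuition behind using multiple branch scales.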
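The third abstract's generation scheme — two conditional generators sharing one seed but conditioned on different modalities, with an adaptive linear combination that tolerates a missing modality — reduces to a simple skeleton. The linear `decode` function and uniform weights below are hypothetical placeholders; STVAE's actual CVAE decoders and learned combination are not reproduced here:

```python
import numpy as np

def decode(z, cond):
    """Hypothetical linear 'decoder': shared seed z shifted by a condition vector."""
    return z + cond

def twin_generate(z, visual_cond=None, semantic_cond=None):
    """Pair two conditional generators on the same seed z ('semi-identical twins'),
    then linearly combine them; an absent modality simply drops out of the mix."""
    twins = []
    if visual_cond is not None:
        twins.append(decode(z, visual_cond))
    if semantic_cond is not None:
        twins.append(decode(z, semantic_cond))
    weights = np.full(len(twins), 1.0 / len(twins))  # placeholder for adaptive weights
    return sum(wi * ti for wi, ti in zip(weights, twins))

rng = np.random.default_rng(1)
z = rng.normal(size=4)
vis, sem = rng.normal(size=4), rng.normal(size=4)

both = twin_generate(z, vis, sem)             # both modalities available
vis_only = twin_generate(z, visual_cond=vis)  # partial modality-absence case
```

The shared seed is what makes the two outputs "semi-identical": they agree on the latent sample and differ only through their conditions, so averaging them blends visual and semantic priors.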