Self-supervised radio representation learning: Can we learn multiple tasks?
Ogechukwu Kanu, Ashkan Eshaghbeigi, and Hatem Abou-Zeid
In ICC 2025-IEEE International Conference on Communications, 2025
Artificial intelligence (AI) is anticipated to play a pivotal role in 6G. However, a key challenge in developing AI-powered solutions is the extensive data collection and labeling effort required to train supervised deep learning models. To overcome this, self-supervised learning (SSL) approaches have recently demonstrated remarkable success across various domains by leveraging large volumes of unlabeled data to achieve near-supervised performance. In this paper, we propose an effective SSL scheme for radio signal representation learning using momentum contrast. By applying contrastive learning, our method extracts robust, transferable representations from a large real-world dataset. We assess the generalizability of these learned representations across two wireless communications tasks: angle of arrival (AOA) estimation and automatic modulation classification (AMC). Our results show that carefully designed augmentations and diverse data enable contrastive learning to produce high-quality, invariant latent representations. These representations are effective even with frozen encoder weights, and fine-tuning further enhances performance, surpassing supervised baselines. To the best of our knowledge, this is the first work to propose and demonstrate the effectiveness of self-supervised learning for radio signals across multiple tasks. Our findings highlight the potential of self-supervised learning to transform AI for wireless communications by reducing dependence on labeled data and improving model generalization, paving the way for scalable foundational 6G AI models and solutions.
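The two mechanisms at the heart of the momentum-contrast scheme the abstract refers to are an exponential-moving-average (EMA) update of a key encoder and an InfoNCE contrastive loss over a query, its positive key, and a set of negative keys. The sketch below is a minimal, dependency-free illustration of those two steps, not the paper's implementation; the list-based "parameters", the function names, and the default hyperparameters (momentum m = 0.999 and temperature 0.07, the values published for MoCo) are assumptions for illustration.

```python
import math

def momentum_update(key_params, query_params, m=0.999):
    """MoCo-style EMA update of the key encoder: key <- m*key + (1-m)*query.

    Parameters are modeled as flat lists of floats for illustration only.
    """
    return [m * k + (1 - m) * q for k, q in zip(key_params, query_params)]

def info_nce(query, pos_key, neg_keys, temperature=0.07):
    """InfoNCE loss for one query against one positive and many negatives.

    Embeddings are assumed L2-normalized, so the dot product is cosine
    similarity. The loss is cross-entropy with the positive at index 0.
    """
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    logits = [dot(query, pos_key)] + [dot(query, k) for k in neg_keys]
    exps = [math.exp(l / temperature) for l in logits]
    return -math.log(exps[0] / sum(exps))
```

A query whose positive key matches it closely yields a near-zero loss, while a mismatched positive yields a large loss, which is what drives the encoder toward augmentation-invariant representations.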