| Issue | A&A, Volume 703, November 2025 |
|---|---|
| Article Number | A41 |
| Number of page(s) | 13 |
| Section | Numerical methods and codes |
| DOI | https://doi.org/10.1051/0004-6361/202554289 |
| Published online | 31 October 2025 |
Leveraging pre-trained vision Transformers for multi-band photometric light curve classification
1 Department of Computer Science, Universidad de Concepción, Edmundo Larenas 219, Concepción, Chile
2 Center for Data and Artificial Intelligence, Universidad de Concepción, Edmundo Larenas 310, Concepción, Chile
3 John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA 02138, USA
4 Heidelberg Institute for Theoretical Studies, Heidelberg, Baden-Württemberg, Germany
5 Millennium Institute of Astrophysics (MAS), Nuncio Monseñor Sotero Sanz 100, Of. 104, Providencia, Santiago, Chile
6 Millennium Nucleus on Young Exoplanets and their Moons (YEMS), Chile
7 Edinburgh Futures Institute, University of Edinburgh, 1 Lauriston Pl, Edinburgh EH3 9EF, UK
★ Corresponding author: dmoreno2016@inf.udec.cl
Received: 27 February 2025
Accepted: 8 August 2025
Context. The advent of large-scale sky surveys, such as the Vera C. Rubin Observatory Legacy Survey of Space and Time (LSST), will generate vast volumes of photometric data, necessitating automatic classification of light curves to identify variable stars and transient events. However, challenges such as irregular sampling, multi-band observations, and diverse flux distributions across bands demand advanced models for accurate classification.
Aims. This study investigates the potential of a pre-trained vision Transformer (VT) model, specifically the Swin Transformer V2 (SwinV2), to classify photometric light curves without the need for feature extraction or multi-band preprocessing. The goal is to assess whether this image-based approach can accurately differentiate astronomical phenomena and whether it can serve as a viable option for working with multi-band photometric light curves.
Methods. We transformed each multi-band light curve into an image. These images served as input to the SwinV2 model, which was pre-trained on ImageNet-21K. The datasets employed include the public Catalog of Variable Stars from the Massive Compact Halo Object (MACHO) survey, using both one and two bands, and the first round of the recent Extended LSST Astronomical Time-Series Classification Challenge (ELAsTiCC), which includes six bands. The model’s performance was evaluated based on six classes for the MACHO dataset and 20 distinct classes of variable stars and transient events for the ELAsTiCC dataset.
Results. The fine-tuned SwinV2 model achieved better performance than models specifically designed for light curves, such as Astromer and the Astronomical Transformer for time series And Tabular data (ATAT). When trained on the “full dataset” of MACHO, it attained a macro F1-score of 80.2% and outperformed Astromer in single-band experiments. Incorporating a second band further improved performance, increasing the F1-score to 84.1%. In the ELAsTiCC dataset, SwinV2 achieved a macro F1-score of 65.5%, slightly surpassing ATAT by 1.3%.
Conclusions. SwinV2, a pre-trained VT model, effectively classifies photometric light curves. It outperforms traditional models and offers a promising approach for large-scale surveys. This highlights the potential of using visual representations of light curves, with future prospects including the integration of tabular data, textual information, and multi-modal learning to enhance analysis and classification in time-domain astronomy.
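The Methods paragraph above describes rendering each irregularly sampled, multi-band light curve as an image before feeding it to SwinV2. As a minimal sketch of such a rasterization, assuming a simple per-band binary scatter encoding with min-max scaled axes (the paper's actual rendering parameters, marker styles, and colour encoding are not specified here), one could write:

```python
import numpy as np

def light_curve_to_image(times, mags, bands, n_bands=6, size=224):
    """Rasterize an irregular multi-band light curve into a
    (size, size, n_bands) array, one channel per photometric band.

    Illustrative sketch only: the rendering choices here are
    assumptions, not the paper's exact procedure.
    """
    img = np.zeros((size, size, n_bands), dtype=np.float32)
    t_range = (times.max() - times.min()) or 1.0
    m_range = (mags.max() - mags.min()) or 1.0
    x = np.clip(((times - times.min()) / t_range * (size - 1)).astype(int), 0, size - 1)
    y = np.clip(((mags - mags.min()) / m_range * (size - 1)).astype(int), 0, size - 1)
    # Flip y so brighter (smaller-magnitude) points sit near the top,
    # as in a conventional light-curve plot.
    img[size - 1 - y, x, bands] = 1.0
    return img

# Synthetic two-band periodic light curve with irregular sampling
rng = np.random.default_rng(0)
times = np.sort(rng.uniform(0, 100, 200))
mags = 15.0 + 0.5 * np.sin(2 * np.pi * times / 10) + rng.normal(0, 0.05, 200)
bands = rng.integers(0, 2, 200)  # band index of each observation

image = light_curve_to_image(times, mags, bands, n_bands=2)
print(image.shape)  # (224, 224, 2)
```

The resulting array could then be normalized and passed to a SwinV2 checkpoint pre-trained on ImageNet-21K for fine-tuning; the channel layout and preprocessing required by a particular checkpoint would depend on the implementation used.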
Key words: methods: data analysis / methods: statistical / surveys / supernovae: general / stars: variables: general
© The Authors 2025
Open Access article, published by EDP Sciences, under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
This article is published in open access under the Subscribe to Open model.