Open Access
Issue
A&A
Volume 708, April 2026
Article Number L3
Number of page(s) 9
Section Letters to the Editor
DOI https://doi.org/10.1051/0004-6361/202558121
Published online 26 March 2026

© The Authors 2026

Licence: Creative Commons. Open Access article, published by EDP Sciences, under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

This article is published in open access under the Subscribe to Open model.

1. Introduction

Combining multiple datasets obtained with different instruments is a common practice in astrophysics. Usually, different datasets are used independently to obtain complementary information on the chemical or physical characteristics of astronomical sources. Alternatively, one can directly combine some parts of datasets. For instance, Bacon et al. (2023) separated sources in MUSE data cubes by incorporating Hubble Space Telescope data. A similar but certainly more ambitious goal consists of performing a comprehensive data fusion in which complementary observations are merged into a single data cube that retains the highest spatial and spectral resolutions of each set (Fig. 1). While such a data fusion process has been widely used in Earth observation (Yokoya et al. 2017), fusion of astronomical data has not yet been accomplished due to several main challenges, notably that the astronomical wavelength ranges are typically large enough to result in non-negligible spectral variations of the optical point spread function (PSF) (Soulez et al. 2013; Hadj-Youcef et al. 2017), which significantly increases the complexity of the fusion process.

Fig. 1.

Schematic illustration of astronomical data fusion. A high spatial resolution multispectral image (a) is fused with a high spectral resolution hyperspectral image (b) to produce a high spatial and spectral resolution hyperspectral cube (c). Credits: ESA/Hubble, NASA.

Despite these challenges, some recent works have already investigated the fusion of synthetic astronomical data. Their main achievements include designing computationally efficient algorithms (Guilloteau et al. 2020a; Pineau et al. 2023, 2025; Lascar et al. 2025) and generating realistic simulated data (Guilloteau et al. 2020b). However, none of the previous studies has considered real astronomical data. In this work, we demonstrate that a practical fusion of JWST data is now achievable by leveraging the latest in situ instrument models, the comprehensive Near Infrared Camera (NIRCam) and Near Infrared Spectrograph (NIRSpec) documentation, and rigorous cross-calibration.

2. Method

The JWST conducts astronomical observations in the near- to mid-infrared range with unprecedented quality (Gardner et al. 2023). Among its instruments, the NIRCam imager (Rieke et al. 2023) and the NIRSpec integral field unit (IFU; Böker et al. 2022) cover a spectral range from 0.6 to 5 μm. The NIRCam imager offers 29 filters across two acquisition channels: the short wavelength channel, below 2.35 μm, with a 0.031 arcsec pixel scale, and the long wavelength channel, above 2.35 μm, with a 0.063 arcsec pixel scale (STScI 2024b). The NIRSpec IFU has a 0.1 arcsec pixel scale and four high resolution filter and disperser combinations (STScI 2024c). Its resolving power is around 2700, providing more than 9600 spectral channels.

We denote by Ym the multispectral image acquired by the NIRCam imager and by Yh the hyperspectral cube acquired by the NIRSpec IFU over the same scene. From this pair of data, we aim to reconstruct the underlying fused hyperspectral cube X defined at the NIRCam spatial resolution and the NIRSpec spectral resolution. To solve this problem, we framed the fusion task as a regularised inverse problem that first requires modelling of the observational processes. The high spatial and spectral resolution cube (X) to be reconstructed is assumed to be related to the NIRCam observations (Ym) according to the forward model

$$ Y_{\mathrm{m}} \approx \mathsf{NIRCam}(X), \tag{1} $$

where NIRCam(⋅) stands for the operator accounting for the effects of the NIRCam imager throughput and PSF. Similarly for the NIRSpec IFU, the forward model is written as

$$ Y_{\mathrm{h}} \approx \mathsf{NIRSpec}(X), \tag{2} $$

where NIRSpec(⋅) encompasses the spatial and spectral degradations resulting from the NIRSpec throughput, PSF, and spatial sub-sampling. It is worth mentioning that for both instruments the PSF varies strongly with wavelength. The instrument models are detailed in Appendix B. Regarding the throughputs, the results reported hereafter were obtained by exploiting the latest available in situ measurements (Rieke et al. 2023; Giardino et al. 2022). We resorted to the theoretical PSF models derived by the WebbPSF simulation tool (Perrin et al. 2014) for their balance of simplicity and accuracy over newer alternatives (Nie et al. 2024). Equations (1) and (2) relate the observation models to the observations through approximations. The expected errors underlying the approximation symbols (≈) are related to imperfect instrument models, data pre-processing, and external and instrumental noises.
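Because the PSF varies with wavelength, the blurring in Eqs. (1) and (2) must be applied channel by channel rather than with a single kernel. A minimal NumPy/SciPy sketch of this wavelength-wise operation, using toy arrays and a hypothetical helper name (not the authors' implementation):

```python
import numpy as np
from scipy.signal import fftconvolve

def wavelength_wise_blur(cube, psfs):
    """Blur each spectral channel of the cube with its own PSF,
    since the PSF varies strongly with wavelength for both instruments."""
    return np.stack([fftconvolve(band, psf, mode="same")
                     for band, psf in zip(cube, psfs)])
```

With a wavelength-independent PSF this would reduce to a single convolution; the per-channel loop is precisely what the spectral variation of the PSF imposes on the fusion process.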

At this point, data fusion boils down to jointly inverting the two instrument models to recover the fused image, X. However, despite its linearity, this inversion is ill-posed, making solutions non-unique and highly sensitive to model mismatches and noise. The strategy we propose consists of reformulating the fusion task as a regularised least squares problem:

$$ \min_X \; \Vert Y_{\mathrm{m}} - \mathsf{NIRCam}(X)\Vert^2 + \gamma \Vert Y_{\mathrm{h}} - \mathsf{NIRSpec}(X)\Vert^2 + R(X), \tag{3} $$

where γ is a parameter adjusting the importance of the NIRSpec data fidelity term and R(⋅) is a regularisation term. This formulation of the fusion task departs from the alternative that would exploit the noise statistics to derive a penalised log-likelihood. Indeed, such an alternative requires establishing a proper noise model, which is very challenging and does not guarantee substantial enhancement of the fusion results (Pontoppidan et al. 2016; Guilloteau et al. 2020b,a). Through regularisation, we promote expected spectral and spatial properties exhibited by the fused cube X. First, X is assumed to be linearly represented by a few elementary spectra. Hence, the adopted spectral regularisation imposes a low-rank structure on the data cube X, which is constrained to belong to an affine subspace of lower dimension. This spectral subspace is spanned by the most relevant spectra identified beforehand by a principal component analysis of the NIRSpec data. Then, the spatial regularisation is derived from a Sobolev norm to promote a smooth spatial content of the fused image. This choice has the advantage of leading to a globally quadratic minimisation problem, which can be easily solved (see Appendix C).
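The structure of Eq. (3) can be sketched as a plain objective function; here `nircam` and `nirspec` stand in for the forward operators of Eqs. (1) and (2) and are assumptions of this sketch, not the authors' code:

```python
import numpy as np

def fusion_objective(X, Ym, Yh, nircam, nirspec, R, gamma):
    """Objective of Eq. (3): two quadratic data-fidelity terms, with
    the NIRSpec term weighted by gamma, plus a regularisation R."""
    fit_m = np.sum((Ym - nircam(X)) ** 2)
    fit_h = np.sum((Yh - nirspec(X)) ** 2)
    return fit_m + gamma * fit_h + R(X)
```

With quadratic R, this objective is globally quadratic in X, which is what makes the minimisation tractable in Appendix C.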

3. Application to JWST data

Our goal is to fuse pairs of input NIRSpec and NIRCam data that cover the same spectral range and field of view (FoV), a practical context herein referred to as symmetric fusion. Fusing real data presents several key challenges: defining the feasible range for its application, finding corresponding data, and ensuring proper pre-processing, such as alignment and cross-calibration. The methodology to meet these challenges is described in Appendix B. The proposed Symmetric Fusion (SyFu) algorithm combines the pre-processing and fusion steps. In what follows, we present its application to JWST observations whose specifications are detailed in Appendix D.

3.1. JWST data

In principle, symmetric fusion of JWST data can be performed across six distinct wavelength ranges (see Fig. D.1 and Table D.2). Yet we decided to restrict the available choices to the NIRCam short wavelength channel (λ < 2.35 μm) since it provides an angular resolution that is two times higher than the long wavelength channel. This choice thus allows the fusion process to reach the largest expected gain in terms of angular resolution. To ensure the largest spectral overlap between the NIRCam short wavelength channel and NIRSpec, the setup relies on the combination of the NIRSpec G235H disperser and F170LP filter (see Table D.2). Five NIRCam filters cover this wavelength range and fully overlap with G235H/F170LP. Namely, the filters are F182M, F187N, F200W, F210M, and F212N (see Fig. D.1 and Table D.2).

In addition, ensuring that the pair of NIRCam and NIRSpec data to be fused cover the same FoV led us to target JWST science programs with concomitant observations by the two instruments. Among the possible completed observations, we chose two different datasets available through the Mikulski Archive for Space Telescopes (MAST) that are part of two Early Release Science (ERS) and Guaranteed Time Observation (GTO) programs. More precisely, they correspond to the protoplanetary disk d203-506 in the Orion Bar observed within the PDRs4All ERS program (Berné et al. 2022) and to Titan observed within the GTO program entitled ‘Titan climate, composition and clouds’ (Nixon et al. 2025). These datasets were selected to meet the aforementioned criteria defining symmetric fusion. The two pairs of NIRCam and NIRSpec JWST datasets are depicted in Figs. 2 and 3, and their respective properties are summarised in Table D.1. In the case of d203-506, it is worth noting that only three filters (namely F182M, F187N, and F210M) are available because no observations were planned through the filter F200W, and those provided by the filter F212N were too noisy (Habart et al. 2024). To mitigate the computational burden induced by the fusion algorithm, the two FoVs in which the fusion was conducted were carefully limited to the celestial objects of interest. For the d203-506 dataset, we chose a 1″ × 1″ FoV centred on the disk or jet (see Figures 2 and 3). For the Titan dataset, we chose a 1.4″ × 1.4″ FoV covering the satellite (see Figures 2 and 3). Finally, we emphasise that the MAST archive may contain additional datasets that satisfy the criteria required for symmetric fusion.

Fig. 2.

Selected JWST NIRCam images. Large field of view NIRCam image (a) of the Orion Bar in the F210M filter. The red square denotes the field of view selected for the d203-506 protoplanetary disk. 1″ × 1″ images (b) of the d203-506 protoplanetary disk observed through NIRCam filters F182M, F187N, and F210M. 1.4″ × 1.4″ images (c) of Titan observed through NIRCam filters F182M, F187N, F200W, F210M, and F212N.

Fig. 3.

Selected JWST NIRSpec IFU F170LP filter and G235H disperser data. NIRSpec observations of the d203-506 protoplanetary disk at 1.982 and 2.122 μm (a). Spectra from those observations (b) at the positions (red and black points) shown in the images. Titan observed by NIRSpec (c) at 1.982 and 2.069 μm. Spectra from those observations (d) at the positions (red and black points) shown in the images. In the plot of the spectra, the two vertical dotted lines indicate the wavelengths of the images. Original data from the MAST database have been rotated, aligned, and cropped (we refer to this as co-registration) as described in Appendix B.

3.2. Spatial and spectral inspections of the fused cubes

Figure 4 presents spatial and spectral visualisations of the resulting high resolution hyperspectral cubes obtained by the SyFu algorithm, whose parameters and properties are discussed in Appendix C and reported in Table C.1. The fused hyperspectral cubes achieve an angular resolution close to that of NIRCam while providing spectral information over the 1.66 to 2.3 μm wavelength range. For the case of d203-506, at 1.98 μm – corresponding to the Paschen-α line of hydrogen – small-scale structures are well recovered (Fig. 4a). Notably, the dark lane caused by the protoplanetary disk silhouetted against the nebular background emission of the Orion Nebula is clearly visible, as is the bright spot associated with the base of a jet (Berné et al. 2024). At 2.12 μm, which corresponds to the ro-vibrational emission of H2 (Berné et al. 2024), the warm wind enshrouding the disk is recovered as well. In the case of Titan, the fused data cube at 1.98 μm allows for a clear recovery of atmospheric haze and cloud structures (Fig. 4c) (Nixon et al. 2025). At 2.07 μm, the surface of the satellite is distinctly visible, with the Belet region in the southern hemisphere being discernible (Nixon et al. 2025). In Fig. 4b and d, it appears that the spectra are well recovered over the 1.66–2.3 μm wavelength range. In the case of d203-506, the spectrum extracted from the fused cube (red spectrum in Fig. 4b) shows a reduced noise level with respect to the original NIRSpec spectrum (Fig. 3b). This noise reduction effect is less noticeable on Titan data (Fig. 4d) because the original NIRSpec cubes (Fig. 3d) have a higher signal-to-noise ratio than the d203-506 data. Complementary experimental results, in particular assessing the consistency of the fused hyperspectral cubes, are reported in Appendix E. For reproducibility, Appendix A contains detailed information supporting the presented results.

Fig. 4.

High resolution hyperspectral cubes resulting from JWST data fusion. The d203-506 protoplanetary disk fused hyperspectral cube (a) shown at 1.982 and 2.122 μm. Spectra from this cube (b) at the positions (red and black points) shown in the images. Titan fused hyperspectral cube (c) shown at 1.982 and 2.069 μm. Two spectra extracted from this cube (d) at the positions (red and black points) shown in the images. In the plot of the spectra, the vertical dotted lines indicate the wavelengths of the images.

4. Discussion

The successful data fusion on real astronomical targets presented here opens up new possibilities for analysing astronomical data and designing future observations with JWST and beyond. In the case of JWST, the fused cubes achieve an angular resolution nearly three times higher than the native resolution of NIRSpec. This improvement could have significant implications across several fields of astronomy, as it may, for instance, enable spatially resolving the thousands of emission lines present in the hundreds of irradiated protoplanetary disks in Orion (such as d203-506, presented in Fig. 4), which would offer new insights into their physical structure. These targets have recently garnered considerable interest (Allen et al. 2025; Schroetter et al. 2025) and have already been observed with NIRCam (Berné et al. 2022; McCaughrean & Pearson 2023). Obtaining complementary NIRSpec IFU data would allow a fusion with NIRCam data and ultimately yield unique high-resolution datasets that could greatly enhance our understanding of planetary system formation, as the fusion would allow us to reach astronomical unit scales (the angular resolution of NIRCam corresponds to about 12 AU in Orion).

There are also many promising opportunities for studying planet-forming disks closer to us and for which NIRCam data are already available (e.g. in Taurus L1527; Villenave et al. 2024). In such cases, the improved angular resolution from fusion could enable the identification of spatial structures within disks in specific emission lines.

Nearby galaxies have also been observed with JWST, for instance, as part of the PHANGS program with NIRCam (Lee et al. 2023). Targeted NIRSpec observations of specific regions could enable data fusion to study star-forming regions within these galaxies at unprecedented spatial scales and levels of detail. This could pave the way for fusion experiments on galaxies at greater distances. For instance, fusing NIRSpec and NIRCam observations of deep fields could help resolve individual star-forming regions in high-redshift galaxies (Claeyssens et al. 2023), providing new tools to understand the formation of stellar clusters at cosmic dawn.

A crucial requirement for accomplishing such a fusion is the availability, on the same telescope, of both an imager and spectrograph, as is the case with NIRCam and NIRSpec on JWST. While not unique to JWST, this configuration is still relatively rare. For example, the X-IFU instrument on the future Athena mission could be combined with its imager (Lascar et al. 2025). We advocate for promoting coupled observation modes in future instruments alongside the development of dedicated data fusion algorithms integrated into their pipelines. This practice is already standard in the context of Earth observation.

Although the successful data fusion presented here shows great promise, several improvements are needed to unlock its full potential. One limitation of the current SyFu algorithm is that both the imager and the IFU must observe the same FoV with complete spatial and spectral overlap (i.e. symmetric fusion). A way to expand the applicability of NIRCam–NIRSpec fusion would be to develop non-symmetric fusion methods. This would, for example, enable fusion across the entire NIRSpec wavelength range (0.6–5 μm), despite the three wavelength gaps in the NIRCam data (Fig. D.1).

In this work, we also employed a classical regularisation based on the Sobolev norm. By design, this regularisation smooths the fused hyperspectral cube, which may lead to the loss of fine texture details present in the input image. A more advanced approach would involve weighting the Sobolev regularisation pixel-wise, for instance, by exploiting the input high-resolution image to identify spatial contours that should be preserved and then reducing the smoothing in those regions of the fused cube (Guilloteau et al. 2022). Other alternatives include informing the regularisation using deep generative models, such as patch normalising flows (Altekrüger et al. 2023), to extract local structure from the high-resolution input image and improve the fusion process.

References

1. Allen, M., Anania, R., Andersen, M., et al. 2025, OJAP, 8
2. Altekrüger, F., Denker, A., Hagemann, P., et al. 2023, Inverse Probl., 39, 064006
3. Bacon, R., Brinchmann, J., Conseil, S., et al. 2023, A&A, 670, A4
4. Berné, O., Habart, É., Peeters, E., et al. 2022, PASP, 134, 054301
5. Berné, O., Habart, E., Peeters, E., et al. 2024, Science, 383, 988
6. Böker, T., Arribas, S., Lützgendorf, N., et al. 2022, A&A, 661, A82
7. Bushouse, H., Eisenhamer, J., Dencheva, N., et al. 2024, https://zenodo.org/records/15632984
8. Chown, R., Okada, Y., Peeters, E., et al. 2025, A&A, 698, A86
9. Claeyssens, A., Adamo, A., Richard, J., et al. 2023, MNRAS, 520, 2180
10. Donoho, D. L. 1995, IEEE TIT, 41, 613
11. Gardner, J. P., Mather, J. C., Abbott, R., et al. 2023, PASP, 135, 068001
12. Giardino, G., Bhatawdekar, R., Birkmann, S. M., et al. 2022, Space Telescopes and Instrumentation 2022: Optical, Infrared, and Millimeter Wave (SPIE), 12180, 382
13. Gordon, K. D., Bohlin, R., Sloan, G., et al. 2022, AJ, 163, 267
14. Guilloteau, C., Oberlin, T., Berné, O., & Dobigeon, N. 2020a, IEEE TCI, 6, 1362
15. Guilloteau, C., Oberlin, T., Berné, O., Habart, É., & Dobigeon, N. 2020b, AJ, 160, 28
16. Guilloteau, C., Oberlin, T., Berné, O., & Dobigeon, N. 2022, in 2022 IEEE ICIP
17. Guizar-Sicairos, M., Thurman, S. T., & Fienup, J. R. 2008, Opt. Lett., 33, 156
18. Habart, E., Peeters, E., Berné, O., et al. 2024, A&A, 685, A73
19. Hadj-Youcef, M. A., Orieux, F., Fraysse, A., & Abergel, A. 2017, in EUSIPCO (IEEE), 503
20. Lascar, J., Bobin, J., & Acero, F. 2025, A&A, 694, A34
21. Lee, J. C., Sandstrom, K. M., Leroy, A. K., et al. 2023, AJ, 944, L17
22. McCaughrean, M., & Pearson, S. 2023, A&A, submitted [arXiv:2310.03552]
23. Nie, L., Shan, H., Li, G., et al. 2024, AJ, 167, 58
24. Nixon, C. A., Bézard, B., Cornet, T., et al. 2025, NA, 9, 969
25. Perrin, M. D., Sivaramakrishnan, A., Lajoie, C.-P., et al. 2014, Space Telescopes and Instrumentation 2014: Optical, Infrared, and Millimeter Wave (SPIE), 9143, 1174
26. Pineau, D., Orieux, F., & Abergel, A. 2023, in IEEE WHISPERS (IEEE), 1
27. Pineau, D., Orieux, F., & Abergel, A. 2025, IEEE TCI, 11, 704
28. Pontoppidan, K. M., Pickering, T. E., Laidler, V. G., et al. 2016, Observatory Operations: Strategies, Processes, and Systems VI (SPIE), 9910, 381
29. Rieke, M. J., Kelly, D. M., Misselt, K., et al. 2023, PASP, 135, 028001
30. Schroetter, I., Berne, O., Boyden, R., et al. 2025, JWST Proposal, Cycle 4, 7534
31. Soulez, F., Thiébaut, É., & Denis, L. 2013, EAS Pub. Ser., 59, 403
32. STScI 2024a, JWST Data - user documentation
33. STScI 2024b, JWST Near Infrared Camera - user documentation
34. STScI 2024c, JWST Near Infrared Spectrograph - user documentation
35. van der Walt, S., Schönberger, J. L., Nunez-Iglesias, J., et al. 2014, PeerJ, 2, e453
36. Villenave, M., Stapelfeldt, K. R., Duchêne, G., et al. 2024, AJ, 961, 95
37. Wang, Z., Bovik, A. C., Sheikh, H. R., & Simoncelli, E. P. 2004, IEEE TIP, 13, 600
38. Wei, Q., Dobigeon, N., & Tourneret, J.-Y. 2015, IEEE TIP, 24, 4109
39. Yokoya, N., Grohnfeldt, C., & Chanussot, J. 2017, IEEE GRSM, 5, 29

Appendix A: Data availability

Our fusion results will be shared upon reasonable request. The original NIRCam and NIRSpec data are available on the MAST archive: https://mast.stsci.edu/portal/Mashup/Clients/Mast/Portal.html (proposals 1288 and 1251). NIRCam throughputs are accessible on the STScI website: https://jwst-docs.stsci.edu/jwst-near-infrared-camera/nircam-instrumentation/nircam-filters.

The SyFu algorithm and the code producing the figures of this paper are available at https://github.com/L4Marquis/SyFu.

Appendix B: Data pre-processing

The SyFu algorithm includes three pre-fusion steps: computing the forward operators, co-registering the pairs of input data, and cross-calibrating the operators. These stages are detailed in the following sections. To lighten notation, the three-dimensional data cubes are unfolded into matrices whose rows (resp. columns) are associated with the spectral (resp. spatial) dimension.

JWST pipeline processing

The datasets for d203-506 and Titan are extracted from the MAST archive; hence, they have already undergone the full JWST science calibration pipeline processing, up to stage 3 (Bushouse et al. 2024). This pipeline converts raw observations acquired with an up-the-ramp strategy into science-ready data in common astrophysical units (e.g. MJy/sr). It computes the coordinates, corrects bad pixels, and removes most of the instrumental noise and cosmic rays present in the data.

Computing the forward operators

The two data-fitting terms defining the objective function underlying the optimisation problem (3) are driven by the forward models that relate the fused hyperspectral cube to be recovered to the NIRCam and NIRSpec data. The NIRCam forward model is written as

$$ \mathsf{NIRCam}(X) = L_{\mathrm{m}} W_{\mathrm{m}}(X), \tag{B.1} $$

where Lm is the matrix representing the NIRCam throughputs and Wm(⋅) stands for the operator which performs wavelength-wise spatial convolutions with the NIRCam PSFs. Similarly the NIRSpec forward model is decomposed as

$$ \mathsf{NIRSpec}(X) = L_{\mathrm{h}} W_{\mathrm{h}}(X) S, \tag{B.2} $$

where Lh and Wh(⋅) denote the NIRSpec throughputs and convolutions by the PSF. The matrix S is a regular sub-sampling matrix that models the difference in resolutions between the NIRCam and NIRSpec data.
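Under the unfolded-matrix convention of this appendix, the throughput and sub-sampling parts of Eqs. (B.1) and (B.2) can be sketched as follows. The PSF-blurred cube is taken as input, shapes are toy examples, and the helper names are hypothetical (not the authors' code):

```python
import numpy as np

def throughput_integrate(blurred, L):
    """The L W(X) product of Eq. (B.1): unfold the PSF-blurred cube
    spatially, then integrate each pixel's spectrum through the
    filter throughputs stored as rows of L."""
    flat = blurred.reshape(blurred.shape[0], -1)    # (n_lambda, n_pix)
    return L @ flat                                 # (n_filters, n_pix)

def subsample(blurred, d):
    """The matrix S of Eq. (B.2) as a regular decimation of the
    spatial grid by an integer factor d."""
    return blurred[:, ::d, ::d]
```

In practice the decimation factor d is the integer pixel-scale ratio between the two instruments (three for the short wavelength channel, as discussed in the co-registration section).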

Co-registration

The user defines the rectangular fusion FoV by specifying a central point along with its height and width. This definition only requires a full overlap between NIRCam and NIRSpec data, which characterises a symmetric fusion. Within this fusion FoV, the NIRCam images and NIRSpec cubes must be co-registered, which requires both a common rotation with respect to North and aligned pixels.

The NIRSpec cubes are resampled in the JWST pipeline so that the pixels are aligned with North. Since the NIRSpec aperture is not aligned with North at the moment of observation, this implies that the FoV is rotated. To obtain a rectangular aperture aligned with North, we rotate the NIRSpec cube by the appropriate angle. We then apply a rotation to the NIRCam image so that it has the same rotation with respect to North as the NIRSpec cube. The next step consists in aligning the NIRCam images and the NIRSpec cube. To do so, two cases need to be considered: non-moving (distant) objects and solar system objects. In the first case of non-moving objects, applicable to d203-506, we rely on the coordinates to align the NIRCam images and the NIRSpec cube. Finally, the NIRCam images are resampled to one-third of the NIRSpec pixel size (0.033″). This division by a factor of three is chosen since it is the closest integer to the pixel scale ratio between NIRSpec and the NIRCam short wavelength channel (3.2). Having an integer value significantly simplifies and speeds up the algorithmic implementation (Wei et al. 2015). In the second case of moving targets, applicable to Titan, the translations to be operated for the image alignment cannot be deduced from the pointing information. The NIRCam images and NIRSpec cubes are resampled to a common pixel size of one-third of the NIRSpec pixel size (0.033″). Then, using the phase cross-correlation technique (Guizar-Sicairos et al. 2008; van der Walt et al. 2014), we compensate for the translations. We note that all mentioned rotation and re-sampling steps were conducted with bicubic interpolations.
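The translation estimation for moving targets can be sketched in pure NumPy. This is a minimal integer-pixel version of the phase cross-correlation idea (Guizar-Sicairos et al. 2008), not the sub-pixel implementation used in practice:

```python
import numpy as np

def integer_shift(ref, moving):
    """Return t such that moving ≈ np.roll(ref, t, axis=(0, 1)),
    estimated from the peak of the normalised cross-power spectrum."""
    F = np.conj(np.fft.fft2(ref)) * np.fft.fft2(moving)
    F /= np.abs(F) + 1e-12                      # keep the phase only
    corr = np.fft.ifft2(F).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # peaks beyond half the image size correspond to negative shifts
    return tuple(p - s if p > s // 2 else p for p, s in zip(peak, corr.shape))
```

Aligning then amounts to translating `moving` by the opposite shift; the production technique upsamples the correlation peak to reach sub-pixel accuracy.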

Cross-calibration

The SyFu algorithm requires a precise calibration of both NIRCam and NIRSpec forward operators to ensure the spectral consistency of the fused hyperspectral cube. First, the raw PSFs produced by the WebbPSF simulation tool are normalised such that their spreading patterns sum to one at each wavelength. The NIRSpec throughput is already compensated during the flat field correction in stage 2 of the JWST pipeline (STScI 2024a), so Lh is an identity matrix. Then, the NIRCam throughputs Lm are interpolated over the NIRSpec wavelengths and cross-calibrated as explained below.

The conventional JWST pipeline procedure to calibrate the throughput Lm, f of the f-th NIRCam filter consists in dividing this throughput by the sum of its coefficients, denoted by ∥Lm, f1 (Gordon et al. 2022). However, since the JWST pipeline does not perform cross-calibration, this procedure results in an intensity mismatch between NIRCam data Ym and NIRSpec data Yh (Chown et al. 2025). To mitigate this mismatch, we cross-calibrate the NIRCam throughputs with respect to the input NIRSpec data Yh to ensure that

$$ \overline{Y_{\mathrm{m},f}} = L_{\mathrm{m},f}\, \overline{Y_{\mathrm{h}}}, \tag{B.3} $$

for every filter f, where Ym, f denotes the image of the f-th NIRCam filter and $\overline{\,\cdot\,}$ denotes the spatial averaging operator. This is done by multiplying Lm, f by the factor

$$ \Vert L_{\mathrm{m},f}\Vert_1 \times \frac{\overline{Y_{\mathrm{m},f}}}{L_{\mathrm{m},f}\, \overline{Y_{\mathrm{h}}}}, \tag{B.4} $$

where the multiplication by ∥Lm, f1 compensates the conventional calibration procedure and the multiplication by the ratio ensures the identity (B.3) holds. The resulting cross-calibration factors, close to 1, are reported for each NIRCam filter in Table C.1.
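The cross-calibration of Eqs. (B.3) and (B.4) can be sketched as follows. The arrays are toy examples, and the throughput rows are assumed here to have already been normalised to unit sum by the conventional calibration procedure, so the ∥Lm, f∥1 term is one in the test below (a sketch, not the pipeline code):

```python
import numpy as np

def cross_calibrate(L_m, Ym, Yh):
    """Multiply each throughput row L_m[f] by the factor of Eq. (B.4),
    so that the spatial-mean identity of Eq. (B.3) holds afterwards.
    L_m: (n_filters, n_lambda), Ym: (n_filters, n_pix), Yh: (n_lambda, n_pix)."""
    yh_mean = Yh.mean(axis=1)                     # mean NIRSpec spectrum
    factors = (np.abs(L_m).sum(axis=1)            # ||L_mf||_1 term of Eq. (B.4)
               * Ym.mean(axis=1) / (L_m @ yh_mean))
    return L_m * factors[:, None], factors
```

The returned factors are the per-filter cross-calibration coefficients, expected to be close to one for well-calibrated data (cf. Table C.1).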

Appendix C: Fusion as a regularised inverse problem

Once the NIRCam and NIRSpec datasets have been co-registered and cross-calibrated, they are fused by solving the minimisation problem (3). The strategy adopted to solve this problem is detailed hereafter.

Regularisations

The SyFu algorithm performs symmetric fusion by solving a spectrally and spatially regularised inverse problem. Instead of considering a complex joint spatial-spectral regularisation, the regulariser R(⋅) = Rspec(⋅)+Rspat(⋅) in (3) is split into two terms associated with the spectral regularisation and the spatial regularisation, respectively.

Regarding the spectral regularisation Rspec(⋅), it is implicitly defined by a hard constraint imposing that the solution X of the minimisation problem lives in the same spectral subspace as the NIRSpec data cube. To identify this subspace, we conduct a principal component analysis (PCA) of the NIRSpec data. This procedure involves centring the NIRSpec data by removing the NIRSpec mean spectrum $\overline{Y_{\mathrm{h}}}$ from each spectrum of Yh. A truncated singular value decomposition then provides a matrix V whose columns contain k elementary spectra spanning the spectral signal subspace. The fused hyperspectral cube is then decomposed as

$$ X = VZ + \Upsilon_{\mathrm{h}}, \tag{C.1} $$

where Z is the representation of the solution in the subspace and $\Upsilon_{\mathrm{h}} = \left[\overline{Y_{\mathrm{h}}},\ldots,\overline{Y_{\mathrm{h}}}\right]$ is a matrix whose columns all equal the mean spectrum $\overline{Y_{\mathrm{h}}}$ obtained by spatially averaging the NIRSpec input data. Beyond promoting spectrally consistent solutions, another key advantage of adopting the hard constraint (C.1) is that it allows the initial regularised least squares problem (3) to be reformulated with respect to the representation Z of the solution in the subspace, significantly decreasing the computational complexity of the optimisation procedure.
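The subspace identification can be sketched with a mean-centred truncated SVD, a standard PCA route (toy shapes and hypothetical helper names, not the authors' code):

```python
import numpy as np

def spectral_subspace(Yh, k):
    """k elementary spectra spanning the NIRSpec signal subspace,
    obtained by SVD of the mean-centred, spatially unfolded cube."""
    mean_spec = Yh.mean(axis=1, keepdims=True)      # NIRSpec mean spectrum
    U, s, _ = np.linalg.svd(Yh - mean_spec, full_matrices=False)
    return U[:, :k], mean_spec                      # V and the affine offset

def reconstruct(Z, V, mean_spec):
    """Eq. (C.1): X = V Z + Upsilon_h, the mean spectrum being the offset."""
    return V @ Z + mean_spec
```

Since V has orthonormal columns, the representation of any cube lying in the subspace is simply Z = Vᵀ(X − Υh).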

Regarding the spatial regularisation, we ensure that the fused hyperspectral cube is spatially smooth by resorting to a Sobolev regularisation Rspat(⋅), which penalises the gradient of the representation of the fused hyperspectral cube:

$$ R_{\mathrm{spat}}(Z) = \mu \Vert Z D \Vert_{\mathrm{F}}^2, $$

where D is the matrix standing for the first-order finite difference operator and ∥ ⋅ ∥F2 denotes the squared Frobenius norm, that is, the sum of the squared coefficients of the matrix. The parameter μ adjusts the weight of the regularisation. Since the columns of V are orthogonal, this choice amounts to defining the spatial regularisation directly on the fused hyperspectral cube X (Guilloteau et al. 2020a).
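The Sobolev term can be sketched with first-order finite differences on each band of the subspace representation. Toy shapes are used, and the boundary handling here is the simplest possible choice, not necessarily the authors':

```python
import numpy as np

def sobolev_penalty(Z, h, w, mu):
    """mu * ||Z D||_F^2: sum of squared first-order spatial differences
    over each band of the subspace representation Z of shape (k, h*w)."""
    maps = Z.reshape(Z.shape[0], h, w)
    return mu * (np.sum(np.diff(maps, axis=1) ** 2)    # vertical gradients
                 + np.sum(np.diff(maps, axis=2) ** 2)) # horizontal gradients
```

The penalty vanishes for spatially constant bands and grows with the squared gradient magnitude, which is exactly the smoothing behaviour the regularisation promotes.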

Numerical resolution

Given the aforementioned forward models and regularisations, the minimisation problem in (3) is rewritten as

$$ \begin{aligned}&\min_Z \Vert Y_{\mathrm{m}} - L_{\mathrm{m}}W_{\mathrm{m}}(VZ + \Upsilon_{\mathrm{h}})\Vert_{\mathrm{F}}^2 \\&\quad + \gamma \Vert Y_{\mathrm{h}} - L_{\mathrm{h}}W_{\mathrm{h}}(VZ + \Upsilon_{\mathrm{h}})S\Vert_{\mathrm{F}}^2 + \mu \Vert ZD\Vert_{\mathrm{F}}^2. \end{aligned} $$(C.2)

In particular, thanks to the linearity of the forward model, the NIRCam data fidelity term can be decomposed as

$$ \Vert Y_{\mathrm{m}} - L_{\mathrm{m}}W_{\mathrm{m}}(VZ + \Upsilon_{\mathrm{h}})\Vert_{\mathrm{F}}^2 = \Vert Y_{\mathrm{m}} - L_{\mathrm{m}}W_{\mathrm{m}}(\Upsilon_{\mathrm{h}}) - L_{\mathrm{m}}W_{\mathrm{m}}(VZ)\Vert_{\mathrm{F}}^2. $$

It is worth noting that, thanks to the cross-calibration and the unit normalisation of the PSFs previously discussed, the identity (B.3) ensures that $Y_{\mathrm{m}} - L_{\mathrm{m}}W_{\mathrm{m}}(\Upsilon_{\mathrm{h}})$ has a near-zero mean.

The resulting problem (C.2) is smooth and convex and can be solved by any differentiable convex optimisation algorithm. The algorithmic procedure to solve the regularised inverse problem is composed of three successive steps: the computation of the PSFs, the assembly of a sparse linear system, and finally its resolution.

Computing the PSFs with WebbPSF takes several hours. They are computed once, for each of the four NIRSpec filter and disperser combinations, and then stored to avoid unnecessary re-computation. To further lighten the computational burden, the associated large convolution operators Wm(⋅) and Wh(⋅) are converted into term-wise multiplications in the Fourier domain. To do so, the associated PSFs, as well as the NIRCam and NIRSpec data, are symmetrically padded and evaluated in the Fourier domain using a two-dimensional fast Fourier transform. This formulation enforces periodic boundary conditions, which may result in unwanted boundary effects, for example, when bright sources are present near the edges of the FoV. Denoting by h and w the spatial height and width of the NIRSpec cube Yh, the size of the padded NIRSpec cube and PSF is 3h × 3w, while it is 9h × 9w for the NIRCam images and PSF. This choice preserves the cross-calibration, ensuring that both the original and padded data and PSFs have the same mean.
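The padded Fourier-domain convolution can be sketched as follows; the function name and the details of the PSF centring are our own illustrative choices, with the symmetric padding to 3h × 3w matching the description above:

```python
import numpy as np

def fft_convolve_symmetric(image, psf):
    """Illustrative sketch: convolve `image` with a centred `psf` of the
    same shape via FFTs, after symmetric padding to 3h x 3w so that the
    periodic boundary conditions do not wrap bright edge sources back
    into the field of view."""
    h, w = image.shape
    padded = np.pad(image, ((h, h), (w, w)), mode="symmetric")  # 3h x 3w
    psf_padded = np.pad(psf, ((h, h), (w, w)))                  # zero-pad PSF
    psf_padded = np.fft.ifftshift(psf_padded)                   # centre at origin
    out = np.fft.irfft2(np.fft.rfft2(padded) * np.fft.rfft2(psf_padded),
                        s=padded.shape)
    return out[h:2*h, w:2*w]                                    # crop back to h x w
```

A delta-function PSF leaves the image unchanged, which provides a quick sanity check of the padding and cropping conventions.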

Then, all quantities defined in (C.2) are vectorised, adopting a sparse matrix representation. By setting the gradient of the resulting objective function to zero, the linear system to be solved is characterised by a large yet structured and highly sparse matrix that can be conveniently represented using the compressed sparse row (CSR) format of the Python SciPy library. Thanks to the highly sparse structure of the linear problem, this vectorisation yields a computational complexity of 𝒪(pmk2) for each iteration of the subsequent iterative algorithm, where pm stands for the number of NIRCam pixels (Guilloteau et al. 2020a). The matrices associated with this sparse linear system, whose expressions are detailed in Guilloteau et al. (2020a), are computed once and stored for subsequent use. This computation takes less than one minute on a laptop when using a reasonably small subspace (here k = 3, see Table C.1).

Table C.1.

Parameters and properties of the SyFu algorithm.

Finally, the linear system is solved using the conjugate gradient method implemented in the scipy.sparse.linalg library, which simplifies and speeds up the tuning of the parameters μ and γ (see Table C.1). Initialising the algorithm with an interpolated counterpart of the NIRSpec data Yh, 1000 iterations of the algorithm are required to obtain satisfactory results (see also the publicly available code, as detailed in Appendix A).
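The solver stage can be sketched on a toy sparse symmetric positive-definite system standing in for the vectorised normal equations of (C.2); the actual SyFu matrices are those of Guilloteau et al. (2020a), and this tridiagonal example is purely illustrative:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg

# Toy sparse SPD system A z = b standing in for the vectorised normal
# equations of (C.2); in SyFu, A is assembled once and stored in CSR format.
n = 1000
A = diags([-1.0, 2.5, -1.0], offsets=[-1, 0, 1], shape=(n, n)).tocsr()
b = np.ones(n)

# In SyFu the initial guess is an interpolated counterpart of the NIRSpec
# data; here a zero vector plays that role.
z0 = np.zeros(n)
z, info = cg(A, b, x0=z0, maxiter=1000)
# info == 0 signals that the conjugate gradient method converged.
```

Because the system matrix is only ever applied as a sparse matrix-vector product, each iteration stays cheap, which is what makes repeated runs for tuning μ and γ affordable.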

Tuning of the algorithmic parameters

The resolution of the regularised inverse problem is governed by several parameters, namely the dimension k of the spectral signal subspace, the parameter γ, which balances the NIRCam and NIRSpec data fitting terms, and the parameter μ, which adjusts the weight of the Sobolev regularisation. These parameters can be tuned manually to optimise the results, or automatically for non-expert users.

Manual tuning – The parameter k, which corresponds to the number of principal components to be kept when conducting the PCA, is adjusted to preserve sufficient information in the NIRSpec data. Beyond its impact on the spectral regularisation, this parameter also affects the dimension of the linear system to be solved. Thus, choosing a small k drastically reduces the computational time required by the iterative minimisation procedure. The parameter γ has been manually adjusted to reach a trade-off between the two data fitting terms. Finally, the weight μ of the regularisation is adjusted by performing a grid search over a few values between 10−4 and 1 and choosing the best one through visual inspection. Table C.1 summarises the values of these parameters manually adjusted for the two datasets.

Automatic tuning – Techniques commonly adopted in image processing can be devised to automatically guide the selection of the hyperparameters k, γ, and μ. One strategy consists of first estimating the noise levels σm and σh in the NIRCam and NIRSpec data, for example, using the robust estimator proposed by Donoho (1995). Then, when implementing the hard spectral constraint, only the principal components with eigenvalues higher than σh2 should be kept in V, which explicitly sets the number k. Finally, a straightforward interpretation of the two other hyperparameters is offered by adopting an empirical Bayesian formulation of the regularised least-squares problem (C.2). This interpretation leads to γ = σm2/σh2 and μ = σm2/α2, where α2 is the mean of the squared Frobenius norm of ZD, which can be empirically estimated from a crude solution $\hat{Z}$, for example, computed from an interpolated counterpart of the NIRSpec data. This strategy relies on the Gaussian assumption for the residual noise, which is not realistic because of the complex nature of the JWST NIRCam and NIRSpec data. Yet, this simple and fast strategy yields results similar to those reported in the paper.
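A possible sketch of such a robust noise estimator, in the spirit of Donoho (1995), uses the median absolute value of the finest-scale Haar diagonal wavelet coefficients; this specific construction is our illustration, not necessarily the estimator used by the authors:

```python
import numpy as np

def noise_std(image):
    """Illustrative robust noise estimate in the spirit of Donoho (1995):
    median absolute value of the finest-scale Haar diagonal wavelet
    coefficients, rescaled assuming Gaussian noise."""
    h, w = image.shape
    im = image[:h - h % 2, :w - w % 2]               # crop to even dimensions
    # Haar HH band: for iid noise of std sigma, these coefficients also
    # have std sigma, while smooth image content is largely cancelled.
    d = (im[0::2, 0::2] - im[1::2, 0::2]
         - im[0::2, 1::2] + im[1::2, 1::2]) / 2.0
    return np.median(np.abs(d)) / 0.6745             # MAD-to-sigma factor
```

The empirical Bayesian choices above then follow directly, for instance γ = noise_std(Ym)**2 / noise_std(Yh)**2, with μ computed analogously from a crude estimate of α2.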

Appendix D: Data properties and range of symmetric fusion

The NIRCam and NIRSpec data extracted from the Barbara A. Mikulski Archive for Space Telescopes (MAST) database are fused over the 3″ × 3″ FoV of NIRSpec (Table D.1). The wavelength ranges over which a symmetric fusion can be performed are limited by:

  1. the presence of three wavelength gaps in the NIRSpec high-resolution gratings (STScI 2024c) (see Fig. D.1),

  2. the range of NIRCam filters, whose spectral information must be fully contained in NIRSpec throughputs.

Fig. D.1.

NIRCam and NIRSpec throughputs available for a symmetric fusion. The throughputs are sorted into four categories (from top to bottom): NIRSpec high-resolution disperser/filter combinations, NIRCam wide (W) filters, NIRCam medium (M) filters, and NIRCam narrow (N) filters. The three wavelength gaps (1.40780–1.4858 μm, 2.36067–2.49153 μm, and 3.98276–4.20323 μm) of the NIRSpec IFU, where the information is only partial, are delimited by the grey dashed lines. NIRCam filters covering those gaps are not displayed.

Table D.1.

Main properties of d203-506 and Titan data.

From these constraints, we extracted the six wavelength ranges available for symmetric fusion (see Table D.2).

Table D.2.

Combination of NIRCam filters and NIRSpec disperser/filter available for symmetric fusion.

Appendix E: Consistency of the fused hyperspectral cubes

E.1. Measures of quality

The fused hyperspectral cubes (Fig. 4) are spectrally degraded using the NIRCam forward model in Eq. (B.1), producing F images denoted by Xm, f for f ∈ [[1; F]]. We then compare these reconstructed images with the NIRCam measurements Ym, f by means of two metrics widely used in image processing. The first one, the peak signal-to-noise ratio (PS/N), computes a normalised relative error between images X and Y:

$$ \mathrm{PS/N}(Y,X) = 10 \log_{10}\frac{\max(Y)^2}{\mathrm{MSE}(Y, X)}, $$(E.1)

where MSE(⋅, ⋅) denotes the mean squared error. The better the reconstruction, the higher the PS/N. In addition, we compute the structural similarity index (Wang et al. 2004), which is a local measure of similarity between two image patches X and Y, designed to match human perception:

$$ \mathrm{SSIM}(Y, X) = \frac{(2\,\overline{Y}\,\overline{X} + c_1)(2\,\sigma(Y, X) + c_2)}{(\overline{Y}^2 + \overline{X}^2 + c_1)(\sigma(Y)^2 + \sigma(X)^2 + c_2)}, $$(E.2)

where σ(⋅, ⋅) is the covariance, σ(⋅) is the standard deviation, and c1 = (0.01(max(Y)−min(Y)))2 and c2 = (0.03(max(Y)−min(Y)))2 are two small constants stabilising the division (Wang et al. 2004). The SSIM is computed locally on patches of size 7 × 7 and then averaged over all patches in the image. The better the reconstruction, the higher the SSIM, with a maximum of 1. The numerical values are reported in Table E.1.
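Both metrics are easily reproduced; the following sketch implements Eq. (E.1) directly and a single-window variant of Eq. (E.2) (the paper averages SSIM over 7 × 7 patches, which we omit here for brevity):

```python
import numpy as np

def psnr(Y, X):
    """Peak signal-to-noise ratio of Eq. (E.1), in dB."""
    mse = np.mean((Y - X) ** 2)
    return 10 * np.log10(Y.max() ** 2 / mse)

def ssim_global(Y, X):
    """Single-window variant of the SSIM of Eq. (E.2); the paper instead
    averages this quantity over 7x7 patches."""
    L = Y.max() - Y.min()                     # dynamic range of the reference
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    my, mx = Y.mean(), X.mean()
    sy, sx = Y.var(), X.var()
    sxy = np.mean((Y - my) * (X - mx))        # covariance
    return ((2 * my * mx + c1) * (2 * sxy + c2)
            / ((my**2 + mx**2 + c1) * (sy + sx + c2)))
```

Identical images yield an SSIM of exactly 1, while the PS/N grows without bound as the mean squared error vanishes.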

Table E.1.

Measures of the fidelity of the fused hyperspectral cubes with NIRCam images.

E.2. Consistency with the input

The consistency of the SyFu method can be evaluated by conducting a sanity check which assesses the spatial and spectral consistency of marginal products derived from the fused hyperspectral cubes with respect to the input NIRCam and NIRSpec data. More precisely, we first integrate the fused hyperspectral cubes over the throughputs of the NIRCam filters used for fusion. The images extracted from the fused cube are compared to the corresponding input NIRCam images in Fig. E.1; they match the NIRCam images very well. This is also supported by the results in Table E.1, where all PS/N values are ≳ 30 dB (resp. ≳ 40 dB) and all SSIM values are > 0.9 (resp. ≳ 0.99) for the d203-506 data (resp. the Titan data), which is recognised as good pixel-wise and perceptual similarity in the image processing literature.

Fig. E.1.

Comparison between JWST NIRCam images and images from the fused data cube. NIRCam images of the d203-506 protoplanetary disk (a) and Titan (b). Images extracted from the fused cube by integrating over the NIRCam throughputs for d203-506 (c) and Titan (d).

Second, we average the fused hyperspectral cubes over the two spatial dimensions. The resulting mean spectra are compared to the spatially averaged NIRSpec spectra in Fig. E.2. These two spectra are nearly indistinguishable. To quantify the differences, we compute the relative error (RE) between the fused cube X and the NIRSpec input data Yh, defined as $(\overline{X} - \overline{Y_{\mathrm{h}}}) / \overline{Y_{\mathrm{h}}}$, where $\bar{\cdot}$ stands for the spatial averaging operator. The relative errors associated with the two datasets are depicted in Fig. E.2 as functions of the wavelength. Over the considered spectral range, they are lower than two percent for the d203-506 data and even lower than 0.2 percent for the Titan data.
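This per-wavelength relative error takes one line per step; the (n_bands, n_pixels) layout and function name are illustrative assumptions:

```python
import numpy as np

def spectral_relative_error(X, Yh):
    """Relative error between the spatially averaged fused cube X and the
    spatially averaged NIRSpec cube Yh, both stored as (n_bands, n_pixels)
    matrices; returns one value per wavelength channel."""
    return (X.mean(axis=1) - Yh.mean(axis=1)) / Yh.mean(axis=1)
```

Plotting this quantity against wavelength reproduces the error curves shown in Fig. E.2.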

Fig. E.2.

Comparison between the JWST NIRSpec and the fused data cube averaged spectra. (a): d203-506 protoplanetary disk averaged NIRSpec spectrum (orange) and fused hyperspectral cube averaged spectrum (dark blue). The relative error (in percent) between them is shown below, in blue. (b): Same as (a), but for Titan.

Overall, Figs. E.1 and E.2 demonstrate that the fused hyperspectral cubes produced by the proposed SyFu algorithm are highly consistent with both the NIRCam and NIRSpec observations. In the following, we show that more sophisticated validation tests are also satisfied.

E.3. Additional validation tests

To further assess the consistency of the data fusion method, we considered the previously described Titan dataset and reran the fusion, excluding the NIRCam data for filters F187N and F212N. Figures E.3a,b show the marginal products derived from the resulting fused hyperspectral cube (as in Figs. E.1 and E.2), which demonstrate that it passes the sanity check described in the Results section. We further validate the consistency of the fused hyperspectral cube in Fig. E.4, where the unused F187N and F212N images are compared with the fused hyperspectral cube integrated over their respective throughputs. From a visual inspection, those images match each other closely, which is supported by respective PS/N values of 34.03 and 28.43 dB (Eq. E.1). To conclude, the SyFu algorithm can generate hyperspectral cubes that are fully consistent even with NIRCam observations excluded from the fusion.

Fig. E.3.

Comparison between the JWST NIRCam and NIRSpec Titan data and the hyperspectral cube resulting from their fusion. (a): Images of Titan observed by NIRCam filters F182M, F200W, and F210M (top) and the Titan fused hyperspectral cube integrated over the corresponding throughputs (bottom). (b): NIRSpec averaged spectrum (orange) and fused hyperspectral cube averaged spectrum (dark blue). The relative error (in percent) between them is shown below, in blue.

Fig. E.4.

The two NIRCam Titan images excluded from data fusion (top) and the fused hyperspectral cube integrated over their respective throughputs (bottom).


All Figures

Fig. 2. Selected JWST NIRCam images. Large field of view NIRCam image (a) of the Orion Bar in the F210M filter. The red square denotes the field of view selected for the d203-506 protoplanetary disk. 1″ × 1″ images (b) of the d203-506 protoplanetary disk observed by NIRCam filters F182M, F187N, and F210M. 1.4″ × 1.4″ images (c) of Titan observed by NIRCam filters F182M, F187N, F200W, F210M, and F212N.

Fig. 3. Selected JWST NIRSpec IFU F170LP filter and G235H disperser data. NIRSpec observations of the d203-506 protoplanetary disk at 1.982 and 2.122 μm (a). Spectra from those observations (b) at the positions (red and black points) shown in the images. Titan observed by NIRSpec (c) at 1.982 and 2.069 μm. Spectra from those observations (d) at the positions (red and black points) shown in the images. In the plot of the spectra, the two vertical dotted lines indicate the wavelengths of the images. Original data from the MAST database have been rotated, aligned, and cropped (we refer to this as co-registration) as described in Appendix B.

Fig. 4. High resolution hyperspectral cubes resulting from JWST data fusion. The d203-506 protoplanetary disk fused hyperspectral cube (a) shown at 1.982 and 2.122 μm. Spectra from this cube (b) at the positions (red and black points) shown in the images. Titan fused hyperspectral cube (c) shown at 1.982 and 2.069 μm. Two spectra extracted from this cube (d) at the positions (red and black points) shown in the images. In the plot of the spectra, the vertical dotted lines indicate the wavelengths of the images.
