Table 5

SDC2 scores for source catalogs from different teams and methods.

Team/method/model   Ms (score)   Ndet   Nmatch   Nfalse   Purity   s̄
Post-challenge results

YOLO-CIANNA v1.0
     - BASE 24 664 36 360 33 404 2962 91.97% 0.8269
        ↪ purity threshold 18 459 22 126 21 927 199 99.10% 0.8509

     - BT1 25 453 36 971 34 115 2862 92.28% 0.8298
        ↪ purity threshold 19 631 23 505 23 294 212 99.10% 0.8518

     - BT2 25 392 36 438 33 785 2658 92.72% 0.8301
        ↪ purity threshold 19 484 23 354 23 135 220 99.06% 0.8517

     - BT3 25 387 36 571 33 791 2786 92.40% 0.8336
        ↪ purity threshold 19 072 22 752 22 543 209 99.08% 0.8553

Challenge results (Ms > 10 000)

MINERVA v0.1* 23 254 32 652 30 841 1811 94.5% 0.81
FORSKA-Sweden 22 489 33 294 31 507 1787 94.6% 0.77
Team SOFIA 16 822 24 923 23 486 1437 94.2% 0.78
NAOC-Tianlai 14 416 29 151 26 020 3131 89.3% 0.67
HI-FRIENDS 13 903 21 903 20 828 1075 95.1% 0.72
... ... ... ... ... ... ...

Notes. The bold elements highlight the optimized metric for each result. *Combination of the catalogs obtained with a prototype version of YOLO-CIANNA 3D and CHADHOC (Hartley et al. 2023).
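The purity column of the table can be recomputed from the detection counts. A minimal sketch, assuming purity = Nmatch / Ndet (the fraction of detections matched to a true source), which is consistent with the thresholded rows above; the row values used here are copied from the table for illustration only:

```python
# Minimal sketch: recomputing the purity column of Table 5.
# Assumption (not stated explicitly in the table): purity = Nmatch / Ndet,
# i.e. the fraction of detected sources that match a true source.

def purity(n_match: int, n_det: int) -> float:
    """Percentage of detections that match a true catalog source."""
    return 100.0 * n_match / n_det

# YOLO-CIANNA v1.0, BT1 row after the purity threshold:
# Ndet = 23 505, Nmatch = 23 294 -> the table lists 99.10%
print(f"{purity(23_294, 23_505):.2f}%")  # -> 99.10%
```

The same helper reproduces the other thresholded rows to two decimal places; small discrepancies in the unthresholded rows may reflect rounding in the published counts.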
