June 21, 2021, midnight UTC
Sept. 5, 2021, midnight UTC
You need to provide a UTF-8 encoded text file for your submission, encoding a table of 100 rows and 2 whitespace-separated columns. Such files can easily be generated or exported from spreadsheets or array-like data, for example from a Python NumPy array using numpy.savetxt(...).
The row index of your submission file corresponds to the index of the debris in the deb_test/
folder. For each debris, you have to give the parent object id (the satellite id referring to the sat/
folder) and the effective area over mass ratio \(C_r(A/m)\).
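As a minimal sketch, the submission file could be produced with numpy.savetxt as mentioned above. The parent ids and ratios here are random placeholders purely for illustration; the id range and ratio range are assumptions, not values from the competition data:

```python
import numpy as np

# Hypothetical values for illustration only: one row per test debris,
# column 1 = parent object id, column 2 = C_r(A/m).
rng = np.random.default_rng(0)
parent_ids = rng.integers(0, 20, size=100)           # assumed id range
area_mass_ratios = rng.uniform(0.01, 1.0, size=100)  # assumed value range

# 100 rows, 2 whitespace-separated columns, UTF-8 encoded.
table = np.column_stack((parent_ids, area_mass_ratios))
np.savetxt("submission.txt", table, fmt="%d %.6f", encoding="utf-8")
```

The single format string `"%d %.6f"` keeps the id column integer-valued while the ratio column stays floating point.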
The score of your submission is determined by two factors, \(G_1\) and \(G_2\). \(G_1\) is the fraction of wrongly identified parent objects: $$ G_1 = \frac{n_w}{N} $$ where \(n_w\) is the number of wrongly identified parent objects and \(N\) the total number of debris. If all parent objects are identified correctly, \(G_1\) will be \(0\).
The second factor \(G_2\) corresponds to the mean square error with respect to the effective area over mass ratio \(C_r(A/m)\):
$$ G_2 = \frac{1}{N} \cdot \sum\limits_{i = 1}^{N} (r_{i, deb} - r_{i, sub})^2 $$
where \(r_{i,deb}\) and \(r_{i,sub}\) are the effective area over mass ratios of the true debris \(i\) and of your submission, respectively. Like \(G_1\), \(G_2\) will be \(0\) if all ratios are identified perfectly.
The final score \(G\) is now determined as:
$$ G = (G_1 + 1)(G_2 + 1) - 1 $$
retaining the value of \(0\) for a perfect submission.
To avoid leaking information to participants who might try to probe our evaluation data through repeated submissions, scoring during the competition is computed on a subset of only 50 of the 100 test debris. After the competition has concluded, each team's submission file that achieved the best score on those 50 debris is re-evaluated on the full set of 100 debris to determine the final score and ranking.