Other dataset testing #4

Closed
nanfangzhe opened this issue Mar 5, 2025 · 2 comments

Hello, may I ask how to test and verify on other datasets?

jbehley commented Mar 5, 2025

The whole design of the dataset is that the unknowns are really unknown, such that it's not possible to "cheat" by looking at the unknown instances and simply tuning the approach towards getting these instances right.

As we understand that some hyperparameter tuning or maybe ablation studies are needed, we decided to have two Codalab tracks for submissions: a validation track with a maximum of 100 submissions and a test track with a maximum of 10 submissions. You can find the Codalab challenge at https://codalab.lisn.upsaclay.fr/competitions/2183

As with SemanticKITTI, validation is performed on Sequence 08 and testing is performed on Sequences 11-21.

For the submission files, we use the SemanticKITTI format, where you have to provide instance ids for all identified instances. @nuneslu can maybe point you to the location in the implementation where these are generated as reference.

nuneslu commented Mar 10, 2025

To convert the predictions into the competition format, you can use this script.
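
The linked script is not reproduced in this thread, but as a minimal sketch of the SemanticKITTI label encoding mentioned above, the snippet below packs per-point semantic and instance predictions into a binary .label file. The class ids, file name, and submission directory layout (typically sequences/&lt;seq&gt;/predictions/&lt;scan&gt;.label) are illustrative assumptions and should be verified against the semantic-kitti-api documentation.

```python
import numpy as np

def write_label_file(semantic, instance, path):
    """Pack per-point predictions into SemanticKITTI's binary .label format.

    Each point becomes one uint32: the lower 16 bits hold the semantic
    class id, the upper 16 bits the instance id (0 for points that
    belong to no instance).
    """
    semantic = semantic.astype(np.uint32) & 0xFFFF
    instance = instance.astype(np.uint32) & 0xFFFF
    packed = (instance << 16) | semantic
    packed.tofile(path)  # raw uint32s, one per point

# Hypothetical predictions for one three-point scan:
semantic_pred = np.array([10, 10, 40])  # e.g. car, car, road (SemanticKITTI ids)
instance_pred = np.array([1, 1, 0])     # 0 = no instance ("stuff" point)
write_label_file(semantic_pred, instance_pred, "000000.label")
```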

nuneslu closed this as completed Mar 26, 2025