
[?] Is a threshold applied to predictions in evaluation? #20

Open · rizavelioglu opened this issue Oct 10, 2023 · 0 comments

rizavelioglu commented Oct 10, 2023

The model makes 100 predictions per sample. In some settings, e.g. a single object in an image, those 100 predictions would include many wrong ones, presumably with low confidence.
Therefore, is a threshold applied to the model's output, for example, removing any prediction with a confidence score below 0.3 (like the `score-thr` argument used in the demo code)?
More specifically, are the results reported in Tables 2 and 3 based on the thresholded output or on the raw output (all 100 predictions)?
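For reference, here is a minimal sketch of the kind of thresholding I have in mind, assuming the model returns per-prediction boxes, labels, and confidence scores as NumPy arrays (the function name and array layout are illustrative, not the repo's actual API):

```python
import numpy as np

def filter_by_score(boxes, labels, scores, score_thr=0.3):
    """Keep only predictions whose confidence meets the threshold.

    boxes:  (N, 4) array of predicted boxes      (illustrative layout)
    labels: (N,)   array of predicted class ids  (illustrative layout)
    scores: (N,)   array of confidences in [0, 1]
    """
    keep = scores >= score_thr
    return boxes[keep], labels[keep], scores[keep]

# With 100 raw predictions, most low-confidence ones would be dropped:
boxes = np.random.rand(100, 4)
labels = np.random.randint(0, 10, size=100)
scores = np.random.rand(100)
f_boxes, f_labels, f_scores = filter_by_score(boxes, labels, scores, score_thr=0.3)
print(f"{len(f_scores)} of {len(scores)} predictions kept")
```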
