# `eval_once` feature

`eval_once.py` is a standalone script that lets you use the `trackeval` module to evaluate a few files quickly, without building the full directory hierarchy normally required for a test. Users running this script must have write access to the local directory in order to execute `eval_once`, since the script creates any directories it needs.

## Recommended usage of `eval_once`

## Not recommended usage of `eval_once`

## How it works
### User side

`eval_once` has 3 input args:
- `dataset` (str): dataset string input. Must be one of the following: `KITTI_2D_BOX`, `KITTI_MOTS`, `MOT_CHALLENGE_2D`, `MOTS_CHALLENGE`, `BDD_100K`, `DAVIS`, `TAO`, `YOUTUBE_VIS`.
- `metric_list` (List[str]): a list of metric names to evaluate. Each metric string must be one of the following: `HOTA`, `CLEAR`, `IDENTITY`, `COUNT`, `JANDF`, `TRACKMAP`, `VACE`.
- `pair_path_list` (List[List[str, str]]): a list of (ground truth, tracker result) path pairs. A pair is a list of two strings. Within each pair, the tracker result is evaluated by `trackeval` against the corresponding ground-truth file.
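The constraints above can be sketched as a pre-check. This is a minimal illustration, not code from `eval_once.py`: the function name `validate_inputs` is mine, and `eval_once` itself may validate its arguments differently.

```python
# Allowed values copied from the argument descriptions above.
DATASETS = {"KITTI_2D_BOX", "KITTI_MOTS", "MOT_CHALLENGE_2D", "MOTS_CHALLENGE",
            "BDD_100K", "DAVIS", "TAO", "YOUTUBE_VIS"}
METRICS = {"HOTA", "CLEAR", "IDENTITY", "COUNT", "JANDF", "TRACKMAP", "VACE"}

def validate_inputs(dataset, metric_list, pair_path_list):
    """Hypothetical pre-check mirroring eval_once's documented contract."""
    if dataset not in DATASETS:
        raise ValueError(f"unknown dataset: {dataset}")
    bad = [m for m in metric_list if m not in METRICS]
    if bad:
        raise ValueError(f"unknown metrics: {bad}")
    for pair in pair_path_list:
        if len(pair) != 2:
            raise ValueError("each pair must be [ground_truth, tracker_result]")
    return True
```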
#### Execution example

As an example, we evaluate one tracker with the HOTA, CLEAR and IDF1 metrics over 3 different sequences.
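Concretely, the inputs for that example could look like the following sketch. The file names are hypothetical, and IDF1 is obtained through the `IDENTITY` metric family; the actual call form in `eval_once.py` may differ.

```python
# Hypothetical inputs: one tracker, 3 sequences,
# HOTA + CLEAR + IDENTITY (IDF1 is reported by the IDENTITY metric).
dataset = "MOT_CHALLENGE_2D"
metric_list = ["HOTA", "CLEAR", "IDENTITY"]
pair_path_list = [
    ["gt/seq01.txt", "tracker/seq01.txt"],
    ["gt/seq02.txt", "tracker/seq02.txt"],
    ["gt/se03.txt".replace("seq0", "seq0"), "tracker/seq03.txt"],
]
# eval_once(dataset, metric_list, pair_path_list)  # actual call lives in eval_once.py
```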
In this example, the files use the `.txt` extension; you can replace it with any required extension.

#### Script Execution
To test this feature, I added a `run_eval_once.py` script in the `scripts` folder. The script runs every metric for every dataset format, on a few files each. To run it you need `data-test.zip`, a light version of the original data folder provided by the repo.

Currently there are some problems with the JAndF metric on the MOTS format (warning), and with the TrackMap metric on YouTube_VIS (error). I do not think they are due to `eval_once`, because JAndF works on the DAVIS format and TrackMap works on the TAO format.

Also, to run the script you need all the packages listed in requirements.txt, except pytest, plus numpy >= 1.20.1, since I had problems executing the existing scripts / evaluation with lower versions (see PR #38). This script is not meant to stay in the repo; I wrote it to test `eval_once` myself and to provide a quick way of testing for others.

### `eval_once` side

#### Generate directories and files
`eval_once` is just a convenient way to create the necessary files. The usual hierarchy can be cumbersome for a few files or in a production chain. `eval_once` has two advantages: first, one-line evaluation; second, all files and folders are removed right after.

In detail, we have a custom `eval_config` dictionary and a `dataset_config` dictionary. The `eval_config` dictionary is the same in every case. The `dataset_config` dictionary is set to match the default `dataset_config` of the `_BaseDataset` objects from `trackeval.datasets`. Then the script creates everything that needs to exist to build the `data` folder. Finally, the environment is evaluated by `trackeval`.

## A cleaner way?
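The create-evaluate-clean-up lifecycle described above can be sketched as follows. The directory names and the helper itself are illustrative only, not the actual layout `trackeval` expects or the real code of `eval_once`:

```python
import shutil
import tempfile
from pathlib import Path

def with_temporary_data_folder(pair_path_list):
    """Illustrative eval_once-style lifecycle: build a throwaway data
    folder, run the evaluation, then remove everything that was created."""
    root = Path(tempfile.mkdtemp(prefix="eval_once_"))
    try:
        for i, (gt_path, tracker_path) in enumerate(pair_path_list):
            seq_dir = root / f"seq_{i:03d}"
            (seq_dir / "gt").mkdir(parents=True)       # ground-truth slot
            (seq_dir / "tracker").mkdir(parents=True)  # tracker-result slot
            # In the real script, gt_path and tracker_path would be copied
            # into whatever layout the chosen trackeval dataset requires.
        # ... here the evaluation would run on `root` via trackeval ...
        return sorted(p.name for p in root.iterdir())
    finally:
        shutil.rmtree(root)  # all files and folders are removed right after
```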
The current implementation is okay from the user's point of view. It has a major drawback: it needs write access to the current directory, and it calls the OS to write / edit / remove files. A better way would be to implement `eval_once` at a deeper level, i.e. run the evaluation function directly without needing to create folders. It would be faster (no I/O) and much cleaner. I do not know whether that is possible with the current `trackeval`, and I cannot look into it soon.