
eval_once feature #40

Open
ATrackerLearner wants to merge 1 commit into JonathonLuiten:master from ATrackerLearner:master

Conversation

@ATrackerLearner

eval_once feature

eval_once.py is a standalone script that lets you use the trackeval module to evaluate a few files quickly, without setting up the full directory structure normally required for a test. Users must have write access to the local directory to run eval_once, since the script creates any directories it needs.


Recommended usage of eval_once

  • Compare a few trackers' performance on the same video sequence
  • Compare a single tracker's performance on a few video sequences

Not recommended usage of eval_once

  • Compare a large number of trackers
  • Compare trackers' performance across multiple benchmarks

How it works

User side

eval_once has 3 input arguments (a short signature sketch follows the list):

  • dataset (str): the dataset format as a string. Must be one of the following: KITTI_2D_BOX, KITTI_MOTS, MOT_CHALLENGE_2D, MOTS_CHALLENGE, BDD_100K, DAVIS, TAO, YOUTUBE_VIS.

  • metric_list (List[str]): a list of metric names to evaluate. Each metric name must be one of the following: HOTA, CLEAR, IDENTITY, COUNT, JANDF, TRACKMAP, VACE.

  • pair_path_list (List[List[str]]): a list of path pairs (ground truth, tracker result). Each pair is a list of two strings. Within each pair, the tracker result is evaluated by trackeval against the corresponding ground-truth file.
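
Put together, the signature implied by these arguments would look roughly like this (the type hints and return value are my reading of the description above, not copied from the PR):

from typing import List

def eval_once(dataset: str,
              metric_list: List[str],
              pair_path_list: List[List[str]]) -> None:
    """Evaluate (ground truth, tracker result) file pairs with the chosen metrics."""
    ...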

Execution example

An example: evaluating tracker output against ground truth with the HOTA, CLEAR and IDENTITY (IDF1) metrics over three (ground truth, tracker result) pairs.

from trackeval import eval_once
eval_once(
    "MOT_CHALLENGE_2D",
    ["HOTA", "CLEAR", "IDENTITY"],
    [
        [
            "/path/to/sequence_gt_1.txt",
            "/path/to/tracker_result_1.txt"
        ],
        [
            "/path/to/sequence_gt_1.txt",
            "/path/to/tracker_result_2.txt"
        ],
        [
            "/path/to/sequence_gt_1.txt",
            "/path/to/tracker_result_3.txt"
        ]
    ]
)

In this example the files use the .txt extension; replace it with whatever extension your dataset format requires.

Script Execution

To test this feature, I added a run_eval_once.py script in the scripts folder. The script runs every metric for every dataset format on a few files each (a sketch of the idea is shown below). To run it you need data-test.zip, a lightweight version of the original data folder provided by the repo.
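
A rough sketch of what that loop could look like (the data-test paths below are placeholders I made up, not the actual layout of data-test.zip):

from trackeval import eval_once

DATASETS = ["KITTI_2D_BOX", "KITTI_MOTS", "MOT_CHALLENGE_2D", "MOTS_CHALLENGE",
            "BDD_100K", "DAVIS", "TAO", "YOUTUBE_VIS"]
METRICS = ["HOTA", "CLEAR", "IDENTITY", "COUNT", "JANDF", "TRACKMAP", "VACE"]

for dataset in DATASETS:
    # Placeholder pair: point these at the files actually shipped in data-test.zip.
    pairs = [[f"data-test/{dataset}/gt.txt",
              f"data-test/{dataset}/tracker.txt"]]
    for metric in METRICS:
        eval_once(dataset, [metric], pairs)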

Currently there are some problems with the JAndF metric on the MOTS format (warning) and with the TrackMap metric on YouTube_VIS (error). I do not think they are caused by eval_once, because JAndF works on the DAVIS format and TrackMap works on the TAO format.

To run the script you also need every package listed in requirements.txt, except pytest, plus numpy >= 1.20.1, since I ran into problems with lower versions when executing the existing scripts / evaluation (see PR #38). This script is not meant to stay in the repo; I wrote it to test eval_once myself and to give others a quick way to test it.

eval_once side

Generate directories and files

eval_once is just a convenient way to create the files and folder hierarchy that trackeval expects. That hierarchy can be cumbersome to set up for a few files or in a production pipeline. eval_once has two advantages: first, one-line evaluation; second, all generated files and folders are removed right afterwards.

In detail, there is a custom eval_config dictionary and a dataset_config dictionary. The eval_config dictionary is the same in every case. The dataset_config dictionary is set to match the default dataset_config of the corresponding _BaseDataset class from trackeval.datasets. The script then creates everything needed to build the data folder, and finally the resulting setup is evaluated by trackeval.
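
For context, the underlying trackeval call that the generated folders feed into looks roughly like the repo's own run scripts (e.g. scripts/run_mot_challenge.py); the folder staging and cleanup that eval_once adds around it are not shown here:

import trackeval

eval_config = trackeval.Evaluator.get_default_eval_config()
dataset_config = trackeval.datasets.MotChallenge2DBox.get_default_dataset_config()
# eval_once would point GT_FOLDER / TRACKERS_FOLDER at the folders it just created,
# then delete them again once the evaluation has finished.

evaluator = trackeval.Evaluator(eval_config)
dataset_list = [trackeval.datasets.MotChallenge2DBox(dataset_config)]
metrics_list = [trackeval.metrics.HOTA(), trackeval.metrics.CLEAR(),
                trackeval.metrics.Identity()]
evaluator.evaluate(dataset_list, metrics_list)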

A cleaner way?

The current implementation is fine from the user's perspective, but it has a major drawback: it needs write access to the current directory and relies on the OS to write / edit / remove files. A better way would be to implement eval_once at a deeper level, i.e. call the evaluation functions directly without creating any folders. That would be faster (no I/O) and much cleaner. I do not know whether that is possible with the current trackeval, and I cannot look into it soon.
