
Improving and extending benchmarks #103

@bytesnake

Description

One area where we are lacking right now is benchmark coverage. I would like to improve that in the coming weeks.

Infrastructure for benchmarking

Benchmarks are an essential part of linfa. They should give contributors feedback on their implementations and give users confidence in the quality of the crate. In order to automate the process we have to employ a CI system which creates a benchmark report on (a) PRs and (b) commits to the master branch. This is difficult with wall-clock benchmarks (e.g. criterion.rs) but possible with valgrind.
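For illustration, an instruction-count benchmark in the style of the iai crate could look like the sketch below. The `fibonacci` function is a placeholder for a real linfa estimator call; the iai wiring is shown in comments (per the crate's documented `iai::main!`/`iai::black_box` API), while the compilable fallback uses only the standard library:

```rust
// Sketch of a valgrind-friendly benchmark in the style of the `iai` crate.
// `fibonacci` is a stand-in for a real linfa estimator call.
fn fibonacci(n: u64) -> u64 {
    match n {
        0 | 1 => 1,
        n => fibonacci(n - 1) + fibonacci(n - 2),
    }
}

fn bench_fibonacci() -> u64 {
    // With iai, the input would be wrapped as `iai::black_box(20)` so the
    // compiler cannot constant-fold the call; `std::hint::black_box` is the
    // standard-library equivalent, used here to keep the sketch self-contained.
    fibonacci(std::hint::black_box(20))
}

// With the crate as a dev-dependency, `iai::main!(bench_fibonacci);` would
// generate a main() that runs the function under valgrind and reports
// instruction counts, cache accesses, and estimated cycles.
fn main() {
    println!("fibonacci(20) = {}", bench_fibonacci()); // prints: fibonacci(20) = 10946
}
```

Because the measurement is instruction counting rather than wall-clock timing, the numbers are deterministic and therefore comparable across CI runs on shared hardware.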

  • use iai for benchmarking
  • add a workflow executing the benchmarks on PRs/commits to master and creating reports in JSON format
  • build a script parsing the reports and posting them as comments on the PR (see here)
  • add a page to the website which displays the reports in a human-readable way
  • (pro) use polynomial regression to find the influence of predictors (e.g. #weights, #features, #samples, etc.) on targets (e.g. L1 cache misses, cycles, etc.) and post the algorithmic complexity as well
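As a sketch of the last item: with benchmark reports at several problem sizes in hand, the complexity exponent for a single predictor can be estimated with an ordinary least-squares fit in log-log space. The data points below are made up; a full polynomial regression over several predictors would generalize this:

```rust
// Sketch: estimate algorithmic complexity from benchmark reports by fitting
// log(cycles) = a + b * log(n) with ordinary least squares. The slope b
// approximates the exponent in O(n^b).
fn fit_loglog_slope(points: &[(f64, f64)]) -> f64 {
    let n = points.len() as f64;
    let (mut sx, mut sy, mut sxx, mut sxy) = (0.0, 0.0, 0.0, 0.0);
    for &(x, y) in points {
        let (lx, ly) = (x.ln(), y.ln());
        sx += lx;
        sy += ly;
        sxx += lx * lx;
        sxy += lx * ly;
    }
    // closed-form OLS slope for a single predictor
    (n * sxy - sx * sy) / (n * sxx - sx * sx)
}

fn main() {
    // Hypothetical (samples, cycles) pairs where cycles grow quadratically
    // with the number of samples, so the fitted slope should be ~2.
    let reports = [(100.0, 1.0e4), (200.0, 4.0e4), (400.0, 1.6e5), (800.0, 6.4e5)];
    let b = fit_loglog_slope(&reports);
    println!("estimated exponent: {:.2}", b); // prints: estimated exponent: 2.00
}
```

Posting the fitted exponent alongside the raw counters would let reviewers spot accidental complexity regressions (e.g. a linear pass becoming quadratic), not just constant-factor slowdowns.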

Metadata

Assignees

No one assigned

    Labels

    infrastructure (General tasks affecting all implementations)
