Official Review #1

@micronet-challenge-submissions

Description

Hello! Thanks so much for your entry! We've successfully evaluated your checkpoint, and the quality checks out. We also greatly appreciate the organization and quality of the code.

One question on your quantization scoring: in your report you say that you count the additions and multiplications separately, but in flops_counter.py it looks like you sum them together and scale both by the reduced-precision factor:

Linear/Conv Counting:
https://github.com/yashbhalgat/QualcommAI-MicroNet-submission-MixNet/blob/master/lsq_quantizer/flops_counter.py#L286

https://github.com/yashbhalgat/QualcommAI-MicroNet-submission-MixNet/blob/master/lsq_quantizer/flops_counter.py#L346

Quantization Scaling:

```python
mod_flops = module.__flops__*max(w_str[quant_idx], a_str[quant_idx])/32.0
```

Am I understanding this correctly? It looks like you're properly rounding the weights and activations prior to each linear operation during evaluation, but the additions in these kernels should be counted as FP32 unless I'm missing something.
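To make the concern concrete, here is a small sketch of the two accounting schemes as I read them. The function name and variables are illustrative, not taken from flops_counter.py; `macs` is the number of multiply-accumulates in one linear/conv layer, and `w_bits`/`a_bits` are the weight and activation bit widths.

```python
def scored_ops(macs, w_bits, a_bits):
    """Return (current, suggested) scaled op counts for one layer.

    Illustrative only -- names and structure are not from flops_counter.py.
    """
    # What the current code appears to do: muls and adds are summed
    # (2 ops per MAC) and BOTH are scaled by the precision factor.
    current = 2 * macs * max(w_bits, a_bits) / 32.0

    # What the review suggests: scale only the multiplications by the
    # reduced precision; count the accumulating additions at full FP32.
    suggested = macs * max(w_bits, a_bits) / 32.0 + macs * 1.0
    return current, suggested
```

For example, with 8-bit weights and activations, a layer with `macs` multiply-accumulates scores `0.5 * macs` under the current scheme but `1.25 * macs` under the suggested one, since the FP32 additions dominate.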

Trevor
