Official Review #1
Description
Hello! Thanks so much for your entry! We've successfully evaluated your checkpoint and the quality checks out! And we'd like to say that we greatly appreciate the organization and quality of the code.
One question on your quantization scoring: In your report you say that you count the additions and multiplications separately, but in flop_counter.py it looks like you sum them together and scale both by the reduced precision factor:
Linear/Conv Counting:
https://github.com/yashbhalgat/QualcommAI-MicroNet-submission-MixNet/blob/master/lsq_quantizer/flops_counter.py#L286
Quantization Scaling:
```python
mod_flops = module.__flops__*max(w_str[quant_idx], a_str[quant_idx])/32.0
```
Am I understanding this correctly? It looks like you're properly rounding the weights and activations prior to each linear operation during evaluation, but the additions in these kernels should still be counted as FP32 unless I'm missing something.
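To make the concern concrete, here is a minimal sketch of the counting I would have expected, assuming `module.__flops__` counts multiply-accumulates (one multiplication plus one addition each); the function name and parameters here are hypothetical, not from your repo:

```python
def scaled_flops(macs, w_bits, a_bits):
    """Score a layer's MACs under the MicroNet quantization rules (sketch).

    Multiplications are scaled by the coarser of the weight/activation
    bit-widths relative to FP32; additions remain at full FP32 cost.
    """
    mults = macs * max(w_bits, a_bits) / 32.0  # mults scale with precision
    adds = macs * 1.0                          # adds counted as FP32
    return mults + adds

# e.g. an 8-bit layer with 1000 MACs: 1000*8/32 + 1000 = 1250.0
print(scaled_flops(1000, 8, 8))
```

In contrast, scaling the combined sum by `max(w_bits, a_bits)/32.0`, as the line above appears to do, would yield only 250.0 for the same layer.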
Trevor