Conversation
Replace Zygote with Enzyme for gradient computation:
- Add Enzyme.jl dependency
- Implement compute_grads() using Enzyme.autodiff with runtime activity mode
- Add train/test mode switching to handle BatchNorm mutation issues
- Refactor mlogloss to use direct indexing instead of onehotbatch
- Configure Enzyme strictAliasing in module __init__

Signed-off-by: AdityaPandeyCN <adityapand3y666@gmail.com>
* Replace Zygote with Enzyme for gradient computation: add the Enzyme.jl dependency, implement compute_grads() using Enzyme.autodiff with runtime activity mode, add train/test mode switching to handle BatchNorm mutation issues, refactor mlogloss to use direct indexing instead of onehotbatch, and configure Enzyme strictAliasing in module __init__
* revert back loss logic
* up
* up
* use reactant and flux approach
* formatting
* revert changes
* add back imports
* update callback.jl
* revert back NAdam and remove CUDA dependency
* Compile gradient function once in init(), reuse across all epochs
* remove CuDNN dependency
* refactor fit.jl
* Add _sync_to_cpu! to MLJ interface to synchronize trained weights from Reactant cache
* Update mlogloss to accept pre-encoded one-hot matrices, removing internal onehotbatch calls
* make track_stats false
* update benchmark file
* simplify cpu sync logic
* clean MLJ.jl
* fix imports
* fix inference, callbacks, and classification
* simplify get_device
* model cleanup
* fix imports
* fix imports
* add gpu inference support
* revert test
* cleanup
* merge conflicts
* merge conflicts
* Switch to lazy data loading to reduce memory usage and replace custom with native for cleaner residual stacking
* revert Project.toml dependency
* revert comments
* revert y_pred to p
* operate callback directly on active object
* cleanup callback.jl
* handle MLogLoss with Lux.CrossEntropyLoss
* use train state for metrics
* fix data loader
* fix fit function arguments
* fix dimension flow
* file changes cleanup
* fix array typing
* sync
* up
* up
* apply partial=false and fix infer
* revert project.toml
* up
* up
* bump version

---------

Signed-off-by: AdityaPandeyCN <adityapand3y666@gmail.com>
Co-authored-by: AdityaPandeyCN <adityapand3y666@gmail.com>
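Two of the commits above touch mlogloss: one replaces onehotbatch with direct indexing, avoiding the allocation of a full one-hot matrix inside the loss. A minimal sketch of that direct-indexing idea (the function name and signature here are illustrative, not the package's actual API):

```julia
# Sketch: multiclass log-loss without materializing a one-hot matrix.
# `p` is a (num_classes, num_samples) matrix of predicted probabilities,
# `y` a vector of integer class labels in 1:num_classes.
function mlogloss_direct(p::AbstractMatrix, y::AbstractVector{<:Integer})
    n = length(y)
    s = 0.0
    for i in 1:n
        s -= log(p[y[i], i])  # pick out the true-class probability directly
    end
    return s / n
end
```

Indexing `p[y[i], i]` reads exactly the entries a one-hot multiplication would select, so the result matches the onehotbatch formulation while skipping the intermediate encoding.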
Member: @AdityaPandeyCN I've just merged the

Author: @jeremiedb Let me have a look at it again. I was able to run this locally on GPU, but may have missed committing some part of the code.

Author: Hello @jeremiedb, I have raised PR #27 with a fix for the above problems:
Migration from v0.3 to v0.4 by moving from Flux/Zygote to Lux/Reactant