Conversation

@NicolasNoya

Add MemStream: Memory-Based Streaming Anomaly Detection

This PR introduces MemStream, a state-of-the-art online anomaly detection framework designed for high-dimensional data streams with concept drift, based on the paper "MemStream: Memory-Based Streaming Anomaly Detection" by Bhatia et al.

What's New

Core Implementation

  • MemStream (Base Class): Abstract base class providing the core framework for memory-based anomaly detection
  • MemStreamPCA: Concrete implementation using PCA-based feature encoding

Architecture

The implementation consists of two main components:

  1. Feature Encoder: Transforms high-dimensional inputs into lower-dimensional representations

    • Currently implements PCA-based projection
  • Extensible design leaves room for future encoders (denoising autoencoders and information-bottleneck encoders were considered but not implemented due to compatibility issues)
  2. Memory Module: Maintains a dynamic collection of encoded "normal" data representations

    • Adapts to concept drift without explicit labels
    • Configurable replacement strategies: FIFO, LRU, and Random
    • Prevents memory poisoning from anomalous samples
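To make the two components concrete, here is a minimal, self-contained sketch of how an encoder, a FIFO memory, and exponentially weighted k-NN scoring fit together. All names (`TinyMemStream`, `score_one`, `learn_one`) and defaults are illustrative only, not the PR's actual API; the gating of memory updates by the anomaly score is what prevents poisoning.

```python
from collections import deque
import numpy as np

class TinyMemStream:
    """Toy sketch: feature encoder + FIFO memory + k-NN scoring.
    Names and defaults are illustrative, not the PR's actual API."""

    def __init__(self, encode, memory_size=4, k=2, gamma=0.1, max_threshold=0.5):
        self.encode = encode                     # encoder: raw x -> encoding z
        self.memory = deque(maxlen=memory_size)  # FIFO replacement policy
        self.k, self.gamma, self.max_threshold = k, gamma, max_threshold

    def score_one(self, x):
        if not self.memory:
            return 0.0
        z = self.encode(x)
        dists = np.sort([np.linalg.norm(m - z) for m in self.memory])[: self.k]
        # exponentially down-weight farther neighbours among the k nearest
        w = np.exp(-self.gamma * np.arange(len(dists)))
        return float(np.sum(w * dists) / np.sum(w))

    def learn_one(self, x):
        # only samples that look normal enter memory, so a burst of
        # anomalies cannot poison the stored "normal" representations
        if self.score_one(x) <= self.max_threshold:
            self.memory.append(self.encode(x))
```

With an identity encoder and a memory seeded with points near the origin, a distant point receives a much higher score than a nearby one, while the threshold check keeps the distant point out of memory.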

Key Features

  • Online Learning: Processes data points one at a time
  • Unsupervised Detection: No labels required during inference (optional during training)
  • Concept Drift Adaptation: Memory evolves over time to handle distribution changes
  • Flexible Scoring: Uses k-nearest neighbors with exponential weighting to compute anomaly scores
  • Grace Period: Collects initial samples to bootstrap the encoder before scoring begins
  • Memory Management: Configurable size and replacement policies

Parameters

  • memory_size: Maximum number of encoded normal samples to store (default: 1,000 for PCA variant)
  • max_threshold: Threshold for accepting samples into memory (default: 0.1)
  • grace_period: Number of initial samples before scoring begins (default: 5,000)
  • n_components: Number of PCA components (default: 20); if the requested value is not feasible for the data, it is adjusted to a value that makes PCA possible
  • k: Number of nearest neighbors for scoring (default: 5)
  • gamma: Exponential weighting factor (default: 0.1)
  • replace_strategy: Memory replacement policy (FIFO, LRU, or RANDOM)
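As a small illustration of the `n_components` fallback described above: PCA cannot use more components than there are samples or features, so an infeasible request has to be clamped. The helper below is a hypothetical sketch of that behaviour, not the PR's code; the defaults dictionary just echoes the parameter list.

```python
import numpy as np

# Hypothetical container echoing the defaults listed above (illustrative only).
PARAMS = dict(memory_size=1_000, max_threshold=0.1, grace_period=5_000,
              n_components=20, k=5, gamma=0.1, replace_strategy="FIFO")

def safe_n_components(n_components: int, X: np.ndarray) -> int:
    """Clamp n_components to what PCA can support for data X
    (a sketch of the fallback described for the PCA variant)."""
    n_samples, n_features = X.shape
    return min(n_components, n_samples, n_features)

# With 50 samples of 8 features, a request for 20 components is clamped to 8.
X = np.random.default_rng(0).normal(size=(50, 8))
print(safe_n_components(PARAMS["n_components"], X))  # -> 8
```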

@kulbachcedric
Contributor

Hi @NicolasNoya,

that's a nice contribution!
However, I think the setup.py creates some issues here.
Can we remove it?

Best
Cedric

@NicolasNoya
Author

Hello @kulbachcedric,

Sorry for the mess. I noticed that the code had changed upstream, so I rebased to check that everything still works on the new version.

I do have one question: would you prefer to keep the abstract class MemStream with the MemStreamPCA subclass, or should we keep only MemStreamPCA? Since the autoencoder class is no longer present, it might be cleaner to keep a single class.

Thanks a lot, and I’ll wait for your feedback.

Best,
Nicolás Noya

@kulbachcedric
Contributor

Hi @NicolasNoya
no worries!
I left you some comments :-)
I think we could remove the MemStream class, as the MemStreamPCA is currently the only class that implements MemStream.

Just another question: would you have an idea for benchmarking the anomaly detection algorithms? This is currently missing from the benchmarks in river and deep-river.

Thanks again for your contribution!

Best
Cedric

@NicolasNoya
Author

Hi @kulbachcedric

This weekend, I’ll try something and see how we could benchmark the anomaly detection algorithms within the framework.

I will also update the code based on your comments!

Best,
Nicolás

@NicolasNoya
Author

Hello @kulbachcedric,
I've been working on the code, and I hope you like this version. I made some improvements to function naming and refined a few methods that were a little sloppy.

I also did some research on anomaly detection benchmarking. It seems that the most common metrics are ROC AUC and PR AUC, and I personally like recall on the anomalous class (I believe this is called sensitivity in anomaly detection). If you’d like, I can start implementing benchmarking for these metrics this week. I can also include memory and runtime measurements, similar to previous River benchmarks.
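For reference, the three metrics mentioned can all be computed from ground-truth labels and raw anomaly scores; only sensitivity needs a decision threshold. A small sketch using scikit-learn on toy data (labels, scores, and the 0.5 threshold are all made up for illustration):

```python
from sklearn.metrics import roc_auc_score, average_precision_score, recall_score

# Toy ground truth (1 = anomaly) and model anomaly scores, purely illustrative.
y_true = [0, 0, 0, 1, 0, 1, 0, 0, 1, 0]
scores = [0.1, 0.2, 0.15, 0.9, 0.3, 0.8, 0.1, 0.25, 0.7, 0.2]

roc_auc = roc_auc_score(y_true, scores)           # threshold-free ranking quality
pr_auc = average_precision_score(y_true, scores)  # PR AUC, robust to class imbalance
preds = [int(s >= 0.5) for s in scores]           # example decision threshold
sensitivity = recall_score(y_true, preds)         # recall on the anomalous class
```

Here the three anomalies all score above every normal point, so all three metrics reach 1.0; on real streams the scores would come from the detector's `score_one` output.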

Please let me know if you spot anything that could be improved in the code, or if anything seems off.

Thanks a lot, and I look forward to your feedback.

Best regards,
Nicolás Noya

@kulbachcedric
Contributor

Hi @NicolasNoya
nice one!
Actually, the changes to setup.py still appear in the diff.
I would also suggest adding docstrings to every function you are adding.

For the benchmarks, should we create a new issue?

Best
Cedric

@kulbachcedric kulbachcedric marked this pull request as draft February 9, 2026 07:27