Add support for Non OpenAI models ? #253
Replies: 2 comments
Hey @ishaan-jaff, it's being considered; however, it's not currently prioritized. Do you have a specific use case for that?
Non-OpenAI model support for alerting/ops tools is a great ask: the choice of which LLM processes your incident data matters for both cost and latency. For an AIOps tool like Keep, multi-model support enables:

- **Cost-tier routing by alert type.** Routine, low-severity alerts (disk usage at 82%) can be triaged by a cheap, fast model, while P0 incidents with complex correlated signals warrant a capable model. This is the 58/11 split in practice for ops workloads.
- **Local models for sensitive data.** Incident data often contains PII, internal hostnames, and credentials in logs. Running a local model (Ollama + Llama 3 70B) means that data never leaves your infrastructure. For security-sensitive alerts, this matters.
- **Latency for real-time triage.** Fast local models, or API calls to smaller hosted models (Haiku-class), have 3-5x lower latency than frontier models. For PagerDuty-style real-time alerting, that latency difference matters.
- **Vendor diversification.** If your entire ops workflow depends on OpenAI's API uptime, that's a reliability risk. Supporting multiple providers (Anthropic, Google, Mistral, local) means your alerting pipeline survives an API outage from any single provider.

Implementation pattern: a model adapter layer that wraps LiteLLM or a similar abstraction, with an alert-severity → model-tier mapping in config. New providers then only need to satisfy the OpenAI-compatible chat completions API.

We built similar provider-agnostic routing for KinthAI's agent inference: https://blog.kinthai.ai/agent-wallet-economic-models-autonomous-agents covers the economic model, and https://blog.kinthai.ai/openclaw-multi-tenancy-why-vm-per-user-doesnt-scale covers the multi-tier routing design.

Is the primary driver for non-OpenAI support cost, data privacy, or reliability?
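The severity-to-tier routing described above can be sketched in a few lines on top of LiteLLM. The tier names, model choices, and function names here are illustrative assumptions, not an actual Keep configuration:

```python
# Sketch of severity-tiered model routing for alert triage.
# The severity tiers and model choices below are illustrative assumptions.

SEVERITY_MODEL_MAP = {
    "low": "ollama/llama3",               # cheap local model for routine alerts
    "medium": "claude-3-haiku-20240307",  # fast, small hosted model
    "high": "gpt-4o",                     # capable model for P0 incidents
}

def route_model(severity: str) -> str:
    """Map an alert severity to a model tier, defaulting to the cheapest."""
    return SEVERITY_MODEL_MAP.get(severity, SEVERITY_MODEL_MAP["low"])

def triage_alert(alert_text: str, severity: str) -> str:
    """Triage an alert with the model tier matching its severity.

    Requires `pip install litellm`; any provider exposing an
    OpenAI-compatible chat completions API works behind this call.
    """
    from litellm import completion  # deferred import so routing is testable offline
    response = completion(
        model=route_model(severity),
        messages=[{"role": "user", "content": f"Triage this alert: {alert_text}"}],
    )
    return response.choices[0].message.content
```

Because LiteLLM normalizes providers behind the OpenAI chat-completions shape, swapping a tier's model is a one-line config change rather than new adapter code.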
Hi @talboren, are you considering adding non-OpenAI models? For example, Claude 2, Azure OpenAI, Llama 2, etc.?