Closed
Currently, llm_composer supports advanced features such as fallback models and provider routing, but only when using the OpenRouter backend. The OpenAI, Ollama, and Bedrock backends have no automatic fallback capability.
Today OpenRouter itself is failing, so it would be valuable to implement fallback providers at the llm_composer library level.
The desired behavior includes:
- Detecting provider errors (e.g., network issues, rate limits, downtime).
- Automatically switching to an alternate, configured provider.
- Optionally allowing provider order or fallback list configuration independent of OpenRouter settings.
- Retaining clear tracing/logging so users know when a fallback occurred.
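As a rough illustration of the behavior described above (not llm_composer's actual API, which is Elixir; all names here are hypothetical), a library-level fallback loop could try each configured provider in order, catch provider errors, log the fallback, and only give up when every provider has failed:

```python
import logging

logger = logging.getLogger("llm_fallback")

class ProviderError(Exception):
    """Raised when a provider call fails (network issue, rate limit, downtime)."""

def run_with_fallback(providers, prompt):
    """Try each configured provider in order, falling back to the next on error.

    `providers` is an ordered list of callables, independent of any
    OpenRouter-specific routing configuration.
    """
    errors = []
    for provider in providers:
        try:
            return provider(prompt)
        except ProviderError as exc:
            # Clear tracing so users know a fallback occurred and why.
            logger.warning("provider %s failed (%s); trying next provider",
                           getattr(provider, "__name__", repr(provider)), exc)
            errors.append(exc)
    raise ProviderError(f"all {len(providers)} providers failed: {errors}")
```

In the real library the same shape would presumably wrap each backend's chat-completion call, with the provider order coming from user configuration rather than a hardcoded list.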
This improvement would strengthen robustness, ensuring llm_composer can gracefully continue operating when individual providers fail.