The Feature
Add support for CompactifAI as an LLM provider.
Motivation, pitch
CompactifAI offers highly compressed versions of leading language models, claiming up to 70% lower inference costs, up to 4x higher throughput, and low-latency serving with minimal quality loss (<5%).
Its OpenAI-compatible API makes integration straightforward and lets developers build scalable apps with high concurrency and efficient resource use.
Adding CompactifAI would give users a cost-effective, high-performance provider option.
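Because the API is OpenAI-compatible, it can presumably already be reached through LiteLLM's generic OpenAI-compatible route. A minimal sketch of what that looks like today (the base URL, model name, and environment variable below are assumptions for illustration, not confirmed values; see the docs linked below for the real ones):

```python
# Sketch: calling a CompactifAI model via LiteLLM's OpenAI-compatible passthrough.
# The api_base, model name, and env var are assumed placeholders.
import os
from litellm import completion

response = completion(
    model="openai/cai-llama-3-1-8b-slim",    # "openai/" prefix -> OpenAI-compatible route
    api_base="https://api.compactif.ai/v1",  # assumed CompactifAI endpoint
    api_key=os.environ["COMPACTIFAI_API_KEY"],
    messages=[{"role": "user", "content": "Hello from LiteLLM!"}],
)
print(response.choices[0].message.content)
```

Native support would presumably add a dedicated provider prefix plus model cost/context metadata on top of this existing route.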
Website / Social links
- Docs: https://docs.compactif.ai/
- Artificial Analysis: https://artificialanalysis.ai/providers/compactifai