</div>
## 🌟 What Developers Are Saying
> _"By far the most elegant conversational AI framework that I've come across! Developing with Parlant is pure joy."_ **— Vishal Ahuja, Senior Lead, Customer-Facing Conversational AI @ JPMorgan Chase**
## 🤝 Community & Support
- 💬 **[Discord Community](https://discord.gg/duxWqxKk6J)** - Get help from the team and community
The Ollama service provides local LLM capabilities for Parlant using [Ollama](https://ollama.ai/). This service supports both text generation and embeddings using various open-source models.
## Prerequisites
1. **Install Ollama**: Download and install from [ollama.ai](https://ollama.ai/)
2. **Start the Ollama server**: Run `ollama serve` (it usually starts automatically after installation)
3. **Pull the required models** (see the [Recommended Models](#recommended-models) section)
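The setup steps above can be sketched as shell commands. The model names below (`llama3.1` for generation, `nomic-embed-text` for embeddings) are illustrative assumptions — substitute whichever models your deployment requires:

```shell
# Start the Ollama server in the background
# (skip if it is already running as a system service)
ollama serve &

# Pull a text-generation model and an embedding model (example names)
ollama pull llama3.1
ollama pull nomic-embed-text

# Verify the server is reachable and list the installed models
curl http://localhost:11434/api/tags
```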
## Environment Variables
Configure the Ollama service using these environment variables:
```bash
# Ollama server URL (default: http://localhost:11434)
```
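Application code can read the server URL from the environment and fall back to the default shown above. A minimal sketch — the variable name `OLLAMA_BASE_URL` is an assumption here; check the service's configuration reference for the exact key:

```python
import os

# Assumed variable name for illustration; the service may use a different key.
# Falls back to Ollama's default local address when the variable is unset.
base_url = os.environ.get("OLLAMA_BASE_URL", "http://localhost:11434")

print(base_url)
```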