File: `web/ahnlich-web/docs/client-libraries/go/go.md` (+138 −6)
---
title: Go
sidebar_position: 10
---

<!-- import GoIcon from '@site/static/img/icons/lang/go.svg' -->

# ⚙️ Ahnlich Go SDK

Official Go client for the Ahnlich similarity-search engine, providing idiomatic access to both **DB** (exact vector search) and **AI** (semantic, embedding-based search) stores. It requires a running Ahnlich backend (`ahnlich-db` on port 1369 and/or `ahnlich-ai` on port 1370).

Visit the source and reference: [GitHub/ahnlich-client-go](https://github.com/deven96/ahnlich/tree/main/sdk/ahnlich-client-go)

---

## ⚙️ Installation

Ensure you have Go ≥ 1.20 and a running Ahnlich backend:

```bash
go get github.com/deven96/ahnlich/sdk/ahnlich-client-go@latest
```

The client communicates with the Ahnlich backend over gRPC. Always call `Close()` when you are done so the underlying connection is cleaned up.
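As a rough illustration of that connection flow, here is a minimal sketch. The sub-package paths and the `NewClient`/`Close` signatures are assumptions for illustration only; check the linked repository for the SDK's actual package layout. Later snippets on this page reuse `ctx`, `dbClient`, and `aiClient` from this sketch.

```go
package main

import (
	"context"
	"log"

	// Hypothetical sub-package paths; the real SDK layout may differ.
	ai "github.com/deven96/ahnlich/sdk/ahnlich-client-go/ai"
	db "github.com/deven96/ahnlich/sdk/ahnlich-client-go/db"
)

func main() {
	ctx := context.Background()

	// Connect to the DB service (default port 1369) over gRPC.
	dbClient, err := db.NewClient(ctx, "127.0.0.1:1369") // hypothetical constructor
	if err != nil {
		log.Fatalf("connect to ahnlich-db: %v", err)
	}
	defer dbClient.Close() // release the underlying gRPC connection

	// Connect to the AI proxy (default port 1370) over gRPC.
	aiClient, err := ai.NewClient(ctx, "127.0.0.1:1370") // hypothetical constructor
	if err != nil {
		log.Fatalf("connect to ahnlich-ai: %v", err)
	}
	defer aiClient.Close()
}
```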

## 🧱 Creating a DB Store

**What's a DB Store?**

A DB Store is a fixed-dimension embedding container for exact nearest-neighbor search using Cosine, Euclidean (L2), or DotProduct metrics. You choose the vector dimension upfront and can optionally configure metadata predicate indexes to accelerate filtering.

To create one:
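Continuing from the connection sketch above, a minimal creation call might look like the following. The `CreateStore` method, the request type, and its field names are illustrative assumptions, not the confirmed API.

```go
// Illustrative request: a named store with a fixed 768-dimension embedding
// space and optional predicate indexes on metadata keys.
err := dbClient.CreateStore(ctx, &db.CreateStoreRequest{ // hypothetical method and type
	Store:            "books",
	Dimension:        768,
	CreatePredicates: []string{"author", "genre"}, // optional metadata predicate indexes
	ErrorIfExists:    true,
})
if err != nil {
	log.Fatalf("create db store: %v", err)
}
```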
This will register the store with your chosen embedding size and default similarity algorithm.

## 🧠 Creating an AI Store

**What's an AI Store?**

An AI Store simplifies semantic search by accepting raw input (text or images) and converting it to embeddings during both ingestion and querying. It requires two models:

- IndexModel: used when ingesting raw data.
- QueryModel: used when computing embeddings for search queries.

These models can be identical (e.g. AIModel_ALL_MINI_LM_L6_V2) but must produce the same embedding dimension, such as 768. This flexibility allows selecting different pipelines for indexing and querying without breaking compatibility.

All original inputs and metadata are preserved for retrieval alongside results.

To create one:
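Again reusing `aiClient` from the connection sketch, a hedged sketch of AI store creation follows. Only `IndexModel`, `QueryModel`, and the `AIModel_ALL_MINI_LM_L6_V2` model name come from the text above; the method, request type, and other field names are assumptions.

```go
// Illustrative request: both models are the same MiniLM variant, so indexed
// data and query embeddings share one embedding dimension.
err := aiClient.CreateStore(ctx, &ai.CreateStoreRequest{ // hypothetical method and type
	Store:         "articles",
	IndexModel:    ai.AIModel_ALL_MINI_LM_L6_V2, // embeds raw data at ingestion
	QueryModel:    ai.AIModel_ALL_MINI_LM_L6_V2, // embeds incoming search queries
	ErrorIfExists: true,
})
if err != nil {
	log.Fatalf("create ai store: %v", err)
}
```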

## 💾 Storing Entries

In a DB Store:
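A sketch of inserting a pre-computed vector plus metadata, reusing `dbClient` from the connection sketch; the `Set` method, request type, and entry fields are illustrative assumptions.

```go
// Illustrative entry: the key is the raw embedding (it must match the store's
// dimension) and the value is arbitrary string metadata.
err := dbClient.Set(ctx, &db.SetRequest{ // hypothetical method and type
	Store: "books",
	Inputs: []db.Entry{
		{
			Key:   []float32{0.12, 0.53 /* ... up to the store's 768 dimensions ... */},
			Value: map[string]string{"title": "Dune", "genre": "sci-fi"},
		},
	},
})
if err != nil {
	log.Fatalf("store db entries: %v", err)
}
```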
In an AI Store (raw ingestion):
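A sketch of raw-text ingestion with `aiClient`; the `RawText` field is referenced in the surrounding text, while the method, request type, and other field names are assumptions.

```go
// Illustrative raw ingestion: the AI store embeds the text with its
// IndexModel, so the caller never supplies a vector.
err := aiClient.Set(ctx, &ai.SetRequest{ // hypothetical method and type
	Store: "articles",
	Inputs: []ai.Entry{
		{
			RawText: "Ahnlich is an in-memory vector database with an AI proxy layer.",
			Value:   map[string]string{"source": "docs", "lang": "en"},
		},
	},
})
if err != nil {
	log.Fatalf("store ai entries: %v", err)
}
```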
If storing images, replace RawText with RawImage (of type []byte).

## 🔍 Searching for Closest Matches

Both DB and AI stores expose GetSimN() for similarity search. Only linear search algorithms (Cosine, Euclidean (L2), DotProduct) are supported; approximate indexing (e.g. HNSW, locality-sensitive hashing) is on the roadmap but not yet available.

DB Store search:
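A sketch of a DB-side `GetSimN()` call with `dbClient`; only the method name and the linear algorithms come from the text above, and the request type, field names, and result shape are assumptions.

```go
// Illustrative search: supply a query vector, how many neighbours to return,
// and one of the supported linear algorithms.
results, err := dbClient.GetSimN(ctx, &db.GetSimNRequest{ // hypothetical request type
	Store:       "books",
	SearchInput: []float32{0.10, 0.48 /* ... up to the store's 768 dimensions ... */},
	ClosestN:    3,
	Algorithm:   db.AlgorithmCosineSimilarity, // or Euclidean (L2) / DotProduct
})
if err != nil {
	log.Fatalf("db search: %v", err)
}
for _, r := range results {
	log.Printf("metadata=%v similarity=%f", r.Value, r.Similarity)
}
```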
AI Store search by query text:
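The AI-side equivalent, where the query is raw text embedded with the store's QueryModel; as above, the request type and field names remain assumptions.

```go
// Illustrative semantic search: the query text is embedded server-side with
// the store's QueryModel before the similarity comparison runs.
results, err := aiClient.GetSimN(ctx, &ai.GetSimNRequest{ // hypothetical request type
	Store:       "articles",
	SearchInput: "what is an in-memory vector database?",
	ClosestN:    5,
	Algorithm:   ai.AlgorithmCosineSimilarity,
})
if err != nil {
	log.Fatalf("ai search: %v", err)
}
for _, r := range results {
	log.Printf("text=%q similarity=%f", r.RawText, r.Similarity)
}
```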

## 🧩 Using Metadata Filtering

You can narrow search results using predicates on metadata—for both DB and AI stores:
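A hedged sketch of attaching a predicate to a search; the condition type and its fields are illustrative assumptions rather than the confirmed API.

```go
// Illustrative predicate: restrict semantic matches to entries whose
// "lang" metadata equals "en".
results, err := aiClient.GetSimN(ctx, &ai.GetSimNRequest{ // hypothetical request and condition types
	Store:       "articles",
	SearchInput: "vector databases for semantic search",
	ClosestN:    5,
	Algorithm:   ai.AlgorithmCosineSimilarity,
	Condition: &ai.PredicateCondition{
		Key:    "lang",
		Equals: "en",
	},
})
if err != nil {
	log.Fatalf("filtered search: %v", err)
}
log.Printf("got %d filtered matches", len(results))
```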
The filter is applied after similarity ranking, so you still retrieve the top-N relevant items that meet the predicate.
File: `web/ahnlich-web/docs/getting-started/getting-started.md` (+96 −2)
---
sidebar_position: 20
title: 🚀 Getting started
---

# 🚀 Getting Started with Ahnlich

Pick one of the three available installation methods below to launch **Ahnlich** within minutes — from containers, pre-built binaries, or by building from source.

## 🐳 1. Install via **Docker** *(Recommended for isolated environments & CI)*

Pull the latest official container images:

```bash
docker pull ghcr.io/deven96/ahnlich-db:latest
docker pull ghcr.io/deven96/ahnlich-ai:latest
```

Run both services locally with default ports (DB → 1369, AI → 1370):

```bash
docker run -d --name ahnlich-db -p 1369:1369 ghcr.io/deven96/ahnlich-db:latest

docker run -d \
  --name ahnlich-ai \
  -p 1370:1370 \
  ghcr.io/deven96/ahnlich-ai:latest
```

For more advanced setups—including tracing, persistence, and model caching—refer to the example [`docker-compose.yml`](https://github.com/deven96/ahnlich/blob/main/docker-compose.yml) in the main repository.

## 2. Download Pre-built Binaries *(Great for local servers & headless deployment)*

You can download OS-specific binaries (for `db` and `ai`) from the [Ahnlich GitHub Releases page](https://github.com/deven96/ahnlich/releases).

Example steps for a Linux (`x86_64-unknown-linux-gnu`) environment:

```bash
# Download the "db" archive for your version/platform from the releases page, then:
tar -xzf x86_64-unknown-linux-gnu-ahnlich-db.tar.gz
chmod +x ahnlich-db
./ahnlich-db --help
```

Repeat the same for the `ahnlich-ai` binary, substituting `db` → `ai` and the correct filename.

You can find complete download instructions (including Windows / macOS options) in the [official repository README](https://github.com/deven96/ahnlich/blob/main/README.md).

## 3. Build from Source with Cargo *(For developers and Rust contributors)*

Clone the up-to-date source and compile the binaries natively with Cargo (e.g. `cargo build --release`). Once built, find the executables in `target/release/`. Move them into your `$PATH` or launch them directly:

```bash
./target/release/ahnlich-db --help
./target/release/ahnlich-ai --help
```

This method is ideal for reviewing code, customizing defaults, or staying on the cutting edge. Ensure you have the Rust toolchain installed via [rustup.rs](https://rustup.rs/).

## ✅ Quick Comparison Table

| Method | External Dependencies | Best For | Upgrade Workflow |
|---|---|---|---|
| Docker images | Docker / a container runtime | Isolated environments & CI | Pull the latest image and restart the container |
| Pre-built binaries | None | Local servers & headless deployment | Download the new release archive and replace the binary |
| Build from source | Rust toolchain (Cargo) | Developers and Rust contributors | Pull the latest source and rebuild |
File: `web/ahnlich-web/docs/overview.md` (+87 −1)
---
title: Overview
sidebar_position: 10
---

# Overview

✨ **Ahnlich** is a modern, in-memory **vector database** paired with a smart **AI proxy layer**, designed to simplify the use of semantic embeddings for developers and AI builders with zero external dependencies.

---

## 🧠 What is Ahnlich?

### 🚀 In-Memory Vector Database

Ahnlich provides an ultra-fast, RAM-resident vector store with:

- **Pure linear similarity search** using **Cosine Similarity**, **Euclidean Distance (L2)**, or **Dot Product** to retrieve semantically similar vectors—ideal for small-to-medium data sets and prototyping.
- **Dynamic update support**—add, update, or delete vectors on-the-fly without full index rebuilds.
- **Zero external service dependency**—runs as a self-contained binary with no server or cluster required.

*(Support for approximate methods like HNSW or LSH is on the roadmap.)*

### 🤖 AI Proxy Layer

Built-in intelligent middleware for embedding-based AI workflows:

- Accepts *raw text inputs*, forwards them to your preferred embedding provider or LLM, and **caches embeddings locally** to reduce redundant API calls.
- Implements **Retrieval-Augmented Generation (RAG)** workflows—pull relevant document embeddings, optionally compose prompts, and send to LLMs.
- Tracks **usage metadata** (timestamps, model IDs, query context) for observability and tuning.

Together, these allow building **AI-aware applications** quickly without managing separate services.

---

## 📚 Vector Databases: Explained

A vector database is purpose-built for **semantic similarity workloads**—it transforms raw content (text/images) into **high-dimensional numeric vectors** alongside their metadata, then stores and retrieves them efficiently for meaning-based search.

While classic nearest-neighbor search relies on expensive all-pairs or linear scans, modern systems often use **index structures** for approximate methods like HNSW, LSH, or Product Quantization—trading off precision for speed.

Ahnlich currently supports only **exact, linear similarity search** over stored vectors using these distance metrics:

| Metric | What it measures |
| --- | --- |
| **Cosine** | Measures the **angle** between vectors (direction) |
| **Euclidean (L2)** | Computes the straight-line **distance** in vector space |
| **Dot Product** | Combines **magnitude + alignment**, fast when pre-normalized |

*(Note: Euclidean/L2, cosine, and dot product are closely related at constant scale.)*
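To make the three metrics concrete, here is a small, dependency-free Go sketch (plain math, not the Ahnlich SDK) that computes each score for two vectors pointing in the same direction but differing in magnitude:

```go
package main

import (
	"fmt"
	"math"
)

// dot returns the dot product of two equal-length vectors.
func dot(a, b []float64) float64 {
	var s float64
	for i := range a {
		s += a[i] * b[i]
	}
	return s
}

// l2 returns the Euclidean (L2) distance between two vectors.
func l2(a, b []float64) float64 {
	var s float64
	for i := range a {
		d := a[i] - b[i]
		s += d * d
	}
	return math.Sqrt(s)
}

// cosine returns the cosine similarity: angle-based, magnitude-invariant.
func cosine(a, b []float64) float64 {
	return dot(a, b) / (math.Sqrt(dot(a, a)) * math.Sqrt(dot(b, b)))
}

func main() {
	a := []float64{1, 2, 3}
	b := []float64{2, 4, 6} // same direction as a, twice the magnitude

	fmt.Println("dot:", dot(a, b))       // 28
	fmt.Println("l2:", l2(a, b))         // ~3.742
	fmt.Println("cosine:", cosine(a, b)) // 1.0, identical direction
}
```

Note how cosine stays at 1.0 because only direction matters, while the dot product and L2 distance are sensitive to magnitude.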

---

## 🌟 Product Pillars

- **Lightning-fast embedding store** in pure memory, optimized for low-latency lookups.
- **Hybrid similarity filtering**, combining semantic distance with metadata constraints.
- **AI-aware proxy engine**, serving as a bridge between your app, embeddings, and LLMs.
- **Lightweight, deployment-free integration**—no server, cluster, or managed runtime needed.
- **Developer-first experience**, focusing on speed and simplicity without sacrificing flexibility.

---

## 🛠️ Use Cases & Applications

- **Document Search & FAQ Retrieval** – Store docs, Markdown content, or product specs as embeddings. Ahnlich retrieves them semantically using cosine/L2, refined by filters like categories or tags.
- **RAG Chat Memory** – Maintain conversational context via embeddings. On each turn, fetch the most relevant past chunks to enrich LLM prompts.
- **Semantic Retrieval of Logs & Snippets** – Developer tooling to find code or log entries that are meaningfully similar—not just keyword matches.
- **Recommendation & Similarity Engines** – Turn items (users, documents, products) into vectors; run coherent similarity + metadata filters (e.g. user locale, rating).
- **Edge & Prototype AI Apps** – No cloud dependency, minimal footprint—ideal for prototyping, embedded deployments, or local development.

---

## 👥 Who Is It For?

- **Developers and AI/Python engineers** building embedding-based logic or semantic apps.
- **Startups & MVP coders** needing fast local experimentation without infrastructure overhead.