# Embeddings & Semantic Search
Search your data with natural language. Local embeddings, zero API costs, 100+ languages.
## The `@semantic` Modifier

Add `@semantic` to any text field to enable natural language search:
```
entity Product {
  name: text
  description: text @semantic  // Enables AI search
  price: float
}

// Now search with natural language
results = search "comfortable running shoes" in Product by description
```
When you save an entity with a `@semantic` field, FLIN automatically generates a vector embedding and stores it alongside the entity. Searches match on meaning, so they find semantically similar content, not just exact keyword matches.
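Under the hood, semantic matching boils down to comparing embedding vectors, most commonly by cosine similarity. Here is a minimal Python sketch using toy 3-dimensional vectors (FLIN's default model produces 384 dimensions; the vectors and the `cosine` helper are illustrative, not FLIN internals):

```python
import math

def cosine(a, b):
    """Cosine similarity: 1.0 = same direction, ~0.0 = unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Pretend these vectors came from embedding each product description.
docs = {
    "comfortable running shoes":  [0.9, 0.4, 0.1],
    "ergonomic office chair":     [0.2, 0.9, 0.3],
    "lightweight trail sneakers": [0.8, 0.5, 0.2],
}
query = [0.85, 0.45, 0.15]  # embedding of "cushioned jogging footwear"

# Rank documents by similarity to the query, highest first.
ranked = sorted(docs, key=lambda d: cosine(query, docs[d]), reverse=True)
print(ranked[0])  # → "comfortable running shoes"
```

Note that the top hit shares no words with the query; the vectors simply point in a similar direction, which is exactly why `@semantic` search beats keyword matching for paraphrased queries.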
## Search Syntax
```
// Basic search
results = search "machine learning tutorials" in Article by content

// With limit
results = search "budget laptops" in Product by description limit 10

// Combined with filters
results = search "vegan recipes" in Recipe by instructions
    .where(category == "dinner")
    .limit(20)
```
## Search Results

Each search result pairs the matching entity with a relevance score:
```
results = search "fast delivery" in Product by description

{for result in results}
  <div>
    <h3>{result.entity.name}</h3>
    <p>{result.entity.description}</p>
    <small>Relevance: {result.score}</small>
  </div>
{/for}
```
## Local vs. Cloud Embeddings
FLIN supports both local (free, offline) and cloud (premium) embedding providers:
| Provider | Model | Dimensions | Cost |
|---|---|---|---|
| Local (default) | multilingual-e5-small | 384 | Free |
| Local | multilingual-e5-large | 1024 | Free |
| FLIN Cloud | flin-embed | 384 | Pay-as-you-go |
| OpenAI | text-embedding-3-small | 1536 | $0.02/1M tokens |
| Voyage AI | voyage-3 | 1024 | $0.06/1M tokens |
### Configuration
```
ai {
  // Default: local embeddings (free, offline)
  embeddings: "local"

  // Optional: use a larger model for better quality
  model: "multilingual-e5-large"
}

// Or use cloud embeddings
ai {
  embeddings: "openai"  // Requires OPENAI_API_KEY
}
```
## Multilingual Support
FLIN's default embedding model supports 100+ languages:
```
entity Article {
  title: text
  content: text @semantic
}

// Search in any language
results = search "apprentissage automatique" in Article by content  // French
results = search "aprendizaje automático" in Article by content     // Spanish
results = search "machine learning" in Article by content           // English

// All three find the same relevant articles!
```
Search in French, find articles written in English. The embedding model understands meaning across languages.
## Hybrid Search

FLIN combines BM25 (keyword) and vector (semantic) search for the best of both:

- **BM25 (keyword):** exact matches for product codes, dates, and SKUs; never misses literal terms.
- **Vector (semantic):** understands meaning, so "fast delivery" also finds "quick shipping" and "express courier".

Result: ~60% fewer wrong-context answers and near-zero hallucinations.
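A common way to fuse a keyword ranking and a vector ranking is reciprocal rank fusion (RRF). Whether FLIN uses RRF or another scheme internally is not specified here; this Python sketch just illustrates why hybrid fusion rewards documents that score well on both signals:

```python
def rrf(rankings, k=60):
    """Reciprocal rank fusion: each doc scores sum(1 / (k + rank))
    over every ranked list it appears in."""
    scores = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical rankings for the query "fast delivery for sku-4417".
bm25_ranking   = ["sku-4417 manual", "express courier FAQ", "returns policy"]
vector_ranking = ["quick shipping guide", "express courier FAQ", "returns policy"]

fused = rrf([bm25_ranking, vector_ranking])
print(fused[0])  # → "express courier FAQ"
```

The FAQ wins because it ranks near the top of *both* lists, while documents that only one signal likes (the literal SKU match, the paraphrase-only match) still surface further down instead of being lost.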
## Real Example: Knowledge Base
```
entity KBArticle {
  title: text
  content: text @semantic
  category: text
}

query = ""
results = []

<main>
  <input
    placeholder="Search knowledge base..."
    value={query}
    input={
      if query.len > 2 {
        results = search query in KBArticle by content limit 10
      }
    } />

  <div class="results">
    {for result in results}
      <article>
        <h3>{result.entity.title}</h3>
        <span class="category">{result.entity.category}</span>
        <p>{result.entity.content.truncate(200)}</p>
        <small>Score: {(result.score * 100).round()}%</small>
      </article>
    {/for}
  </div>
</main>
```
## Offline-First
FLIN embeddings work completely offline:
- No API calls needed for local embeddings
- Model downloads once, cached forever (~100MB)
- Works on airplane, in tunnels, anywhere
- Zero embedding costs for local mode
The embedding model downloads automatically on first use; after that, it loads straight from the local cache and searches run instantly.
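The download-once, cache-forever behavior follows a simple check-then-fetch pattern. Here is a hypothetical Python sketch of that pattern; the cache path, file name, and `fetch` callback are illustrative, not FLIN's actual cache layout:

```python
import tempfile
from pathlib import Path

def ensure_model(cache_dir: Path, fetch) -> Path:
    """Return the cached model file, downloading it only on first use."""
    model_path = cache_dir / "multilingual-e5-small.bin"
    if not model_path.exists():
        cache_dir.mkdir(parents=True, exist_ok=True)
        model_path.write_bytes(fetch())  # one-time download (~100 MB in practice)
    return model_path

# Demonstrate with a stub fetcher that records how often it runs.
cache = Path(tempfile.mkdtemp())
calls = []
fake_fetch = lambda: calls.append(1) or b"weights"

ensure_model(cache, fake_fetch)  # first run: downloads
ensure_model(cache, fake_fetch)  # second run: served from cache
print(len(calls))  # → 1: the fetch happened exactly once
```

Once the file exists, every subsequent call is a cheap filesystem check, which is why local mode works on a plane or in a tunnel with zero API calls.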