Choosing the right embedding model for your VDB
The embedding model determines how your documents and queries are represented in the vector database. Choosing one means weighing language coverage, domain fit, dimensionality, latency, and cost. Use a bi-encoder for retrieval and a cross-encoder for re-ranking, and benchmark on your own data before committing.
Summary
- Match model to data: language (English, multilingual), domain (legal, medical, code), modality (text, image, multi-modal).
- Trade-offs: dimension (recall vs. memory/speed), model size (latency), fine-tuning for domain.
- Bi-encoder for retrieval; cross-encoder for re-ranking; benchmark on your data for recall@k and latency.
- Open-weight: run locally, fine-tune; API: no ops, pay per token. A new model means a new latent space: re-embed and re-index when switching.
- Use a labeled set and measure recall@k, MRR/NDCG, latency, and throughput; compare several models on the same data.
Matching model to data
Match the model to your data: for English-only text, sentence-transformers models (e.g. all-MiniLM, all-mpnet) or OpenAI embeddings are common choices. For multilingual or non-English content, use a model trained on that language or a multilingual one (e.g. multilingual-e5, paraphrase-multilingual). For code or mixed content, pick a model trained on that modality.
Domain matters: biomedical or legal text often benefits from fine-tuned or domain-pretrained models, so that similarity aligns with your notion of relevance. Use a multilingual model when your corpus or queries span multiple languages; because it embeds everything in one space, you can search across languages. For a single-language corpus, a dedicated model may be better. In the pipeline, choose a bi-encoder for the VDB and optionally add a cross-encoder for re-ranking; see how text embeddings are generated and cross-encoders vs. bi-encoders.
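The retrieve-then-re-rank pipeline can be sketched in a few lines. Here `embed` and `cross_score` are deliberately toy stand-ins for a bi-encoder and a cross-encoder (in practice both would be learned models, e.g. from the sentence-transformers library); the point is the two-stage shape: cheap vector retrieval over the whole corpus, then an expensive pairwise scorer over only the candidates.

```python
import zlib
import numpy as np

def embed(text: str, dim: int = 8) -> np.ndarray:
    """Toy bi-encoder: hash words into a fixed-size unit vector."""
    v = np.zeros(dim)
    for w in text.lower().split():
        v[zlib.crc32(w.encode()) % dim] += 1.0
    n = np.linalg.norm(v)
    return v / n if n else v

def cross_score(query: str, doc: str) -> float:
    """Toy cross-encoder: scores the (query, doc) pair jointly via word overlap."""
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / max(len(q | d), 1)

def search(query, docs, doc_vecs, retrieve_k=3, final_k=2):
    sims = doc_vecs @ embed(query)                 # stage 1: cosine retrieval
    candidates = np.argsort(-sims)[:retrieve_k]
    reranked = sorted(candidates,                  # stage 2: re-rank candidates only
                      key=lambda i: -cross_score(query, docs[i]))
    return [docs[i] for i in reranked[:final_k]]

docs = ["how to sort a list in python",
        "termination clauses in employment contracts",
        "sorting algorithms explained",
        "mri imaging protocols"]
doc_vecs = np.stack([embed(d) for d in docs])
print(search("python sort list", docs, doc_vecs))
```

Note that only `retrieve_k` documents ever reach the cross-encoder, which is what keeps re-ranking affordable: the quadratic-cost pairwise model never sees the full corpus.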
Trade-offs and benchmarking
Trade-offs: a higher dimension (e.g. 768 vs. 384) usually gives better recall but costs more storage and slows ANN search. Lighter models encode faster, which matters for real-time ingestion and query encoding. Also consider whether one model can encode both documents and queries (symmetric search) or whether you need an asymmetric setup; re-ranking with a cross-encoder is a separate stage layered on top of bi-encoder retrieval.
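The storage side of the dimension trade-off is easy to estimate: raw float32 vectors cost n_vectors × dim × 4 bytes, before any ANN index overhead. A back-of-envelope sketch (the 10M-document corpus size is an assumption for illustration):

```python
# Raw float32 vector storage: n_vectors * dim * 4 bytes.
# ANN index overhead (e.g. HNSW graph links) comes on top of this.
def index_bytes(n_vectors: int, dim: int, bytes_per_float: int = 4) -> int:
    return n_vectors * dim * bytes_per_float

n = 10_000_000  # hypothetical 10M-document corpus
for dim in (384, 768):
    print(f"dim={dim}: {index_bytes(n, dim) / 1e9:.1f} GB")
# dim=384: 15.4 GB; dim=768: 30.7 GB -- doubling dim doubles storage
```

Memory scales linearly in dimension, so halving it halves the vector footprint; whether the recall hit is acceptable is exactly what the benchmark below should tell you.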
Benchmark on a sample of your data against your target recall@k and latency before committing: build a labeled set of queries with their relevant documents, compute recall@k (and optionally MRR or NDCG), measure latency and throughput at your batch size, and compare several models on the same data. Open-weight models (e.g. sentence-transformers) run locally with no per-call cost and can be fine-tuned; API models need no ops but charge per token and limit you to the provider's models and versions. You cannot switch models without re-indexing, because a new model defines a new latent space; see handling updates to the embedding model.
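Given each model's ranked results over the same labeled set, the metrics reduce to simple list arithmetic. A minimal sketch (the function names and the toy rankings are illustrative, not from any library):

```python
import numpy as np

def recall_at_k(ranked: list[list[int]], relevant: list[set[int]], k: int) -> float:
    """Mean fraction of each query's relevant docs found in its top-k results."""
    hits = [len(set(r[:k]) & rel) / len(rel) for r, rel in zip(ranked, relevant)]
    return float(np.mean(hits))

def mrr(ranked: list[list[int]], relevant: list[set[int]]) -> float:
    """Mean reciprocal rank of the first relevant doc per query (0 if none found)."""
    rr = []
    for r, rel in zip(ranked, relevant):
        rank = next((i + 1 for i, d in enumerate(r) if d in rel), None)
        rr.append(1.0 / rank if rank else 0.0)
    return float(np.mean(rr))

# Two queries: one model's rankings vs. ground-truth relevant doc ids.
ranked = [[3, 1, 2, 0], [0, 2, 1, 3]]
relevant = [{1}, {1, 3}]
print(recall_at_k(ranked, relevant, 2))  # query 1: 1/1, query 2: 0/2 -> 0.5
print(mrr(ranked, relevant))             # 1/2 and 1/3 -> ~0.417
```

Run the same harness over every candidate model's rankings so the numbers are directly comparable; latency and throughput need a separate wall-clock measurement at your real batch size.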
Frequently Asked Questions
Open-weight vs. API embedding models?
Open (e.g. sentence-transformers): run locally, no per-call cost, can fine-tune. API: no ops, pay per token; limited to provider’s models and versions.
How do I benchmark embedding models?
Use a labeled set (query, relevant docs); compute recall@k and optionally MRR/NDCG. Measure latency and throughput for your batch size. Compare several models on the same data. See measuring recall at k and measuring latency.
When should I use a multilingual model?
When your corpus or queries are in multiple languages. Multilingual models embed all in one space so you can search across languages. For single-language, a dedicated model may be better.
Can I switch models without re-indexing?
No. New model = new latent space. You must re-embed and re-index.
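A minimal sketch of the switch, assuming a hypothetical new model `embed_v2` and a plain dict standing in for the index: re-embed every document offline into a fresh index, and swap to it only once the rebuild is complete, so queries never hit a half-migrated latent space.

```python
def reindex(docs: dict[str, str], embed) -> dict[str, list[float]]:
    """Re-embed every document with the new model into a brand-new index."""
    return {doc_id: embed(text) for doc_id, text in docs.items()}

def embed_v2(text: str) -> list[float]:
    """Hypothetical stand-in for the new embedding model."""
    return [float(len(text)), float(text.count(" "))]

docs = {"d1": "hello world", "d2": "vector databases"}
new_index = reindex(docs, embed_v2)  # build offline, old index still serving
live_index = new_index               # swap the pointer only after a full rebuild
print(sorted(live_index))            # ['d1', 'd2']
```

The same pattern applies with a real VDB: write into a new collection, then repoint queries (or flip an alias) once it is fully populated.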