Dense Retrieval

Keywords: dense retrieval, RAG

Dense retrieval uses learned neural embeddings to find relevant documents, and often outperforms traditional keyword methods on queries where semantic matching matters.

Contrast with sparse retrieval: Sparse methods (BM25, TF-IDF) use exact term matching over inverted indices; dense retrieval maps text into a continuous vector space where similar meanings cluster.

Key models: DPR (Dense Passage Retrieval), ColBERT (late interaction), Contriever, GTR, E5, BGE.

Training: Contrastive learning, where positive (query, relevant document) pairs are pulled close in embedding space and negatives are pushed apart.

Architecture: Bi-encoders use separate query and document encoders and are fast; cross-encoders apply joint attention over the query-document pair and are more accurate but slower.

Indexing: Pre-compute document embeddings and store them in a vector database with an approximate nearest neighbor (ANN) index such as HNSW, often via a library like FAISS.

Inference: Encode the query and find its nearest neighbors, typically in milliseconds.

Advantages: Semantic understanding, robustness to vocabulary mismatch, and generalization to unseen queries.

Limitations: Requires training data, depends heavily on embedding quality, and may miss exact keyword matches.

Best practice: Combine dense retrieval with BM25 in a hybrid approach for production RAG (retrieval-augmented generation) systems.
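The contrastive training objective described above can be sketched as a minimal in-batch InfoNCE loss in NumPy. Real systems compute this inside a deep-learning framework with learned encoders; the function name and temperature value here are illustrative assumptions, not taken from any particular library.

```python
import numpy as np

def info_nce_loss(q_emb, d_emb, temperature=0.05):
    """In-batch contrastive (InfoNCE) loss: each query's positive is the
    document at the same batch index; every other document in the batch
    serves as a negative."""
    # L2-normalize so dot products are cosine similarities
    q = q_emb / np.linalg.norm(q_emb, axis=1, keepdims=True)
    d = d_emb / np.linalg.norm(d_emb, axis=1, keepdims=True)
    logits = q @ d.T / temperature                 # (batch, batch) similarities
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    # Softmax cross-entropy with the diagonal as the correct class
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))
```

Training drives this loss down by making each query embedding most similar to its own positive document, which is exactly the "positives close, negatives far" behavior described above.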
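The indexing and inference steps can be sketched with exact cosine-similarity search in NumPy. This is a toy stand-in: production systems replace the brute-force scan with an ANN index (e.g., HNSW via FAISS) for speed, and the function names here are assumptions for illustration.

```python
import numpy as np

def build_index(doc_embeddings):
    """Indexing step: pre-compute and L2-normalize document embeddings."""
    return doc_embeddings / np.linalg.norm(doc_embeddings, axis=1, keepdims=True)

def search(index, query_embedding, k=3):
    """Inference step: encode the query (assumed already embedded here) and
    return the k most similar documents by cosine similarity. This is exact
    search; an ANN index trades a little accuracy for much lower latency."""
    q = query_embedding / np.linalg.norm(query_embedding)
    scores = index @ q                  # cosine similarity to every document
    top = np.argsort(-scores)[:k]       # indices of the k best matches
    return top, scores[top]
```

Because document embeddings are computed once at index time, only the query needs encoding at inference, which is what makes the bi-encoder design fast.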
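One common way to implement the hybrid BM25-plus-dense best practice is reciprocal rank fusion (RRF), which merges the two ranked lists without needing to calibrate their score scales. This is a sketch of RRF in general, not a claim about any specific system; k=60 is the constant commonly used in the RRF literature.

```python
def reciprocal_rank_fusion(rankings, k=60):
    """Fuse ranked lists of doc IDs (best first), e.g., one list from BM25
    and one from dense retrieval. Each document scores 1/(k + rank + 1) per
    list; documents ranked well by either retriever float to the top."""
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)
```

Rank-based fusion is robust because BM25 scores and cosine similarities live on incompatible scales; fusing ranks sidesteps that mismatch entirely.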
