Qdrant
@qdrant_engine
High-performance Rust-based vector search engine. https://discord.com/invite/qdrant
🎉 Qdrant 1.15 is here! 🎉 With smarter quantization, stronger text filtering, and key performance upgrades. Here's what's new: ➡️ 1.5- & 2-bit + asymmetric quantization for up to 24× compression with near-scalar accuracy ➡️ Built-in multilingual text index…
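The low-bit and asymmetric quantization mentioned above can be illustrated in plain Python. This is a toy sketch, not Qdrant's implementation or API: each 32-bit float is mapped to a 2-bit code, and "asymmetric" scoring keeps the query at full precision while only the stored vector is quantized.

```python
def quantize_2bit(vec, lo=-1.0, hi=1.0):
    """Map each float in [lo, hi] to one of 4 levels (2 bits),
    turning a 32-bit float into a 2-bit code (16x smaller)."""
    step = (hi - lo) / 3  # 4 levels -> 3 steps between them
    return [min(3, max(0, round((x - lo) / step))) for x in vec]

def dequantize_2bit(codes, lo=-1.0, hi=1.0):
    """Reconstruct approximate floats from 2-bit codes."""
    step = (hi - lo) / 3
    return [lo + c * step for c in codes]

def asymmetric_dot(query, codes):
    """Asymmetric scoring: the query stays full precision and only
    the stored side is quantized, which preserves more accuracy
    than quantizing both sides."""
    return sum(q * x for q, x in zip(query, dequantize_2bit(codes)))

codes = quantize_2bit([1.0, -1.0, 1.0 / 3.0])
score = asymmetric_dot([1.0, 0.0, 0.0], codes)
```

Plain 2-bit codes give a 16× size reduction versus float32; the 1.5-bit encoding and asymmetric variants in the release push the compression/accuracy trade-off further.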

⏰ Scrolling on a Sunday? Don't forget about the multimodal search webinar this Thursday! One API for text + image embeddings + vector search with Qdrant Cloud Inference. Save your spot: try.qdrant.tech/cloud-inferenc…

Indexing Faces for Scalable Visual Search. Build your own Google-style photo finder in minutes with face detection, embeddings, and Qdrant, with a step-by-step preview. Repo (a star is appreciated!): github.com/cocoindex-io/c… Tutorial: cocoindex.io/blogs/face-det… More details: …

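The tutorial's pipeline (detect faces → embed them → index → search) can be sketched end-to-end with a toy in-memory index. In the actual repo the vectors come from a face-embedding model and live in a Qdrant collection rather than a dict; the photo ids and 3-dimensional vectors below are made up for illustration.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy stand-in for a Qdrant collection: photo id -> face embedding.
face_index = {
    "photo_1": [0.9, 0.1, 0.0],
    "photo_2": [0.1, 0.9, 0.0],
    "photo_3": [0.0, 0.2, 0.9],
}

def find_similar_faces(query_embedding, k=1):
    """Return ids of the k photos whose face embeddings are closest."""
    ranked = sorted(face_index,
                    key=lambda pid: cosine(query_embedding, face_index[pid]),
                    reverse=True)
    return ranked[:k]
```

Swapping the dict for a Qdrant collection changes only the storage and search calls; the detect → embed → query shape of the pipeline stays the same.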
Our July newsletter is out! Have a read: try.qdrant.tech/july-newsletter Want to stay up to date on product news and cool Qdrant stuff? Subscribe: qdrant.tech/subscribe/

Researchers at @ETH_en and @Stanford released an open dataset of 5.8M+ long-form medical QA pairs, each grounded in peer-reviewed literature and designed for RAG. The pipeline: ▪️ Source: 900K+ full-text medical papers (S2ORC) ▪️ QA generation via GPT-3.5 with a three-stage…

Building a hybrid search pipeline with Qdrant Cloud Inference. No need to run your own models or wire up external services: Qdrant Cloud now supports fully managed inference - embed, store, and search, all inside your vector database. This hands-on tutorial walks you…
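A hybrid pipeline typically runs a dense (semantic) query and a sparse (keyword/BM25) query, then fuses the two ranked result lists. Reciprocal Rank Fusion is one common fusion choice; this is a generic sketch of the idea, not the Cloud Inference API itself.

```python
def reciprocal_rank_fusion(rankings, k=60):
    """Fuse several ranked lists of ids: each id earns 1/(k + rank)
    per list it appears in; higher total score ranks first."""
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# "b" ranks well in both lists, so it wins the fused ranking.
dense_hits = ["a", "b", "c"]   # from vector similarity
sparse_hits = ["b", "c", "a"]  # from keyword matching
fused = reciprocal_rank_fusion([dense_hits, sparse_hits])
```

The constant `k` damps the influence of top ranks so one list cannot dominate; 60 is the value commonly used in the RRF literature.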

It's time to keep up with modern RAG. Stop stuffing entire PDFs into your vector DB. With Tensorlake + @qdrant_engine, you can: - Parse and extract only the useful parts of a doc - Index precise segments like tables or specific sections - Run focused, context-aware search…

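Indexing precise segments instead of whole PDFs can be sketched as follows. The parsing here is a naive stand-in (paragraph splitting plus a crude table heuristic) for what a real document parser like Tensorlake extracts; the point is that each typed segment becomes its own indexable unit.

```python
def segment_document(doc_text):
    """Split parsed text into typed segments so each can be embedded
    and filtered on its own, instead of embedding the whole file."""
    segments = []
    blocks = (b.strip() for b in doc_text.split("\n\n") if b.strip())
    for i, block in enumerate(blocks):
        kind = "table" if block.startswith("|") else "text"
        segments.append({"id": i, "type": kind, "text": block})
    return segments

def select_segments(segments, kind):
    """Keep only the segment types worth indexing, e.g. just tables."""
    return [s for s in segments if s["type"] == kind]
```

Storing the segment type as payload alongside each vector is what later lets a query target, say, only tables or only a specific section.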
🚨 NEW WEBINAR ALERT 🚨 Learn to embed, store & search with Qdrant Cloud Inference: 🔺 Text + image embeddings 🔺 Industry-first multimodal search 🔺 Single API, zero glue code 🔺 5M free tokens to start Register now: try.qdrant.tech/cloud-inferenc…

📢 Last chance to register! Tomorrow, @lettria_fr joins Qdrant and @neo4j to reveal how they scaled GraphRAG in production: ✅ 100M+ embeddings ✅ Sub-200ms latency ✅ 25% accuracy boost over traditional RAG Get the full architecture breakdown and learn what actually makes…

"What's the best embedding model for my use case?" It's one of the most common questions we hear from the Qdrant community. The short answer: there's no one-size-fits-all. Language support, tokenizer quirks, inference cost, model size,…

I have 3 free tickets to the Vector Space Day conference from @qdrant_engine. Want to win one? 🔸 Follow me 🔸 Retweet this tweet. Winners will be selected randomly; results on Friday. Good luck! More info: lu.ma/p7w9uqtz

🎧 @LukawskiKacper on the Data Engineering Podcast: how MCP servers and vector databases are redefining data pipelines for the AI era, and why vector search is now core infrastructure. Listen here: dataengineeringpodcast.com/episodepage/st…
‼️ Retrieval is core to agent execution. We spoke with @cerebral_valley about why agent memory must be semantic, multimodal, and real-time, and how Qdrant Cloud Inference delivers embedding + vector indexing in one API call. See the article: cerebralvalley.beehiiv.com/p/introducing-…

📢 We're joining @neo4j and @lettria_fr this Wednesday to break down how Lettria built a scalable GraphRAG system in production, integrating Neo4j for graph reasoning and Qdrant for vector retrieval. With speakers @LukawskiKacper, @JMHReif, and Romain Albrand. Details &…

🚨 Call for Speakers: Submit Now! 🚨 Join us at Vector Space Day 2025, a full-day in-person event in Berlin dedicated to the future of vector-native search, retrieval, and AI infrastructure. 📍 Berlin, Germany 📅 Friday, September 26 🎟️ Tickets: €50, includes access to the…

Spoke at the @mlopscommunity meetup yesterday about @qdrant_engine mcp-for-docs. Got pics. And come onnn, what are these facial expressions lmaooo

🎥 Watch a demo of Qdrant Cloud Inference - now live in Qdrant Cloud. With Cloud Inference, you can generate embeddings directly inside your cluster. That means: ✅ Embeddings generated inside your Qdrant cluster ✅ No external model calls or data transfer ✅ Just one API call…
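The "one API call" point can be sketched as the shape of an upsert request that carries raw text plus a model name instead of a precomputed vector, leaving the embedding step to the server. Field names below are illustrative only, not the actual Qdrant Cloud request schema.

```python
def inference_upsert_request(point_id, text, model_name):
    """Build an upsert body where the server, not the client, embeds
    `text` with `model_name` - no separate embedding API call needed.
    (Illustrative field names, not the real Qdrant Cloud schema.)"""
    return {
        "points": [{
            "id": point_id,
            "vector": {"text": text, "model": model_name},
            "payload": {"original_text": text},
        }]
    }

req = inference_upsert_request(1, "hello world", "example-embedding-model")
```

The design win is that raw data never leaves the cluster for embedding: one request both vectorizes and stores the point.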

Ready to build production-ready RAG systems? Our Open Source Engineer @itsclelia shares battle-tested lessons from building vector search applications in the wild. 🔧 Text extraction strategies: when to use simple parsing vs. advanced OCR-based solutions like LlamaParse for…