WEKA
@WekaIO
NeuralMesh™ by WEKA® - The world's only storage system purpose-built for AI. Accelerate performance, deploy anywhere, grow stronger with scale.
🚨 Live from #RAISESummit: WEKA unveils NeuralMesh Axon—breakthrough storage for exascale #AI.
⚡ 10x faster checkpointing
⚡ 20x faster time-to-first-token
📈 90%+ GPU utilization
Built for LLMs, agentic AI & real-time inference. weka.ly/4lGLfzq
At #AIInfraSummit, WEKA CTO @shimonbd and @nvidia's Nave Algarici shared how we’re helping orgs scale #AI inferencing—combining WEKA’s real-world infrastructure with NVIDIA acceleration. 🎥 Watch the full convo: weka.ly/44QQGF1
GenAI at scale needs storage that can keep up. Our NCP-certified architectures for GB200 NVL72 & HGX systems deliver 1+ GB/s per GPU, scale to 18K+ GPUs, and run on fast, efficient Micron 9550 SSDs. Get the full breakdown. 👇 weka.ly/3IsGoUi
.@liranzvibel delivered a dynamic keynote session during @RaiseSummit exploring how the rise of the token economy is reshaping #AI. From scalable infra to sustainable innovation, the future of AI depends on balancing speed and efficiency. #RAISESummit



.@togethercompute serves 500K+ devs with one of the fastest inference engines in #AI. To scale fast and slash latency, they leverage WEKA. 👉 See how: weka.ly/4o1tf4R
Speed is critical in #AI infrastructure—but efficiency is everything. NeuralMesh goes beyond traditional dedupe with similarity-aware optimization that makes your NVMe storage work smarter, not harder. Here's how. 👇 weka.ly/4kS6fm5
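The post only teases the idea. As a rough illustration of what "similarity-aware" data reduction means in general (grouping nearly identical blocks and storing small deltas instead of full copies), here is a toy sketch. It is not WEKA's algorithm, data structures, or on-disk format, and the byte-level comparison and 0.9 threshold are arbitrary choices made for the example.

```python
# Toy illustration of similarity-aware data reduction: unlike exact dedupe,
# blocks that are merely *similar* to an existing reference are stored as
# small deltas instead of full copies. Conceptual sketch only, not WEKA's
# implementation.

def similarity(a: bytes, b: bytes) -> float:
    """Fraction of byte positions at which two equal-length blocks agree."""
    if len(a) != len(b):
        return 0.0
    return sum(x == y for x, y in zip(a, b)) / len(a)

def reduce_blocks(blocks, threshold=0.9):
    """Keep full copies only for blocks unlike any existing reference;
    otherwise record just the differing positions (a naive delta)."""
    refs, deltas = [], []
    for blk in blocks:
        match = next((r for r in refs if similarity(r, blk) >= threshold), None)
        if match is None:
            refs.append(blk)  # new reference block, stored in full
        else:
            deltas.append([(i, y) for i, (x, y) in enumerate(zip(match, blk)) if x != y])
    return refs, deltas

a = bytes(4096)                    # 4 KiB of zeros
b = bytes(4095) + b"\x01"          # same block with one byte changed
c = bytes(range(256)) * 16         # unrelated content
refs, deltas = reduce_blocks([a, b, c])
print(f"{len(refs)} reference blocks, {len(deltas)} delta-encoded blocks")
# prints: 2 reference blocks, 1 delta-encoded blocks
```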
In GPU-heavy #AI workloads, every microsecond matters. Our Augmented Memory Grid + our new NIXL plugin for @nvidia Dynamo = blazing-fast inference at near-memory speeds. And we’re just getting started. Learn more. 👇 weka.ly/413LQDd
AI: “I need low latency, infinite scale, and zero drama.”
NeuralMesh: “Say less.”
⚡ Microsecond latency
📦 Exabyte scale
🤖 K8s-native magic
Legacy storage could never. Explore the white paper. ⬇️ weka.ly/455mTc0 weka.ly/450ap5s
.@nvidia GB200 systems are built to push the limits of #AI + ML. NeuralMesh—validated for NVIDIA’s NCP RA—scales to 18K+ GPUs with speed, simplicity, and power. Take a closer look at the full reference architecture. 👇 weka.ly/45gt12f
.@huggingface is making #AI accessible to all—and we're proud to help. 🤝 Running on WEKA + @awscloud, they’re accelerating model loading, boosting training & inference, and cutting latency. Learn more about how we're fueling the next wave of AI innovation. 👇…
LLMs feeling slow and clunky? Our Augmented Memory Grid delivers up to 75X faster time to first token, slashing latency and handling huge prompts with ease—no extra GPUs required. Learn more. ⬇️ weka.ly/46O3NJH
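Rough arithmetic on why reloading a previously computed KV cache can beat recomputing prefill for a long prompt. Every number below (prompt length, prefill throughput, per-token KV footprint, storage bandwidth) is an illustrative assumption, not a WEKA or NVIDIA measurement; the 75X figure above is WEKA's own claim.

```python
# Back-of-envelope time-to-first-token comparison: recomputing prefill for a
# long prompt vs. reloading a stored KV cache from fast shared storage.
# All values are illustrative assumptions.

prompt_tokens = 100_000           # assumed long-context prompt
prefill_tok_per_s = 10_000        # assumed GPU prefill throughput
kv_bytes_per_token = 160 * 1024   # assumed KV-cache footprint per token
storage_gbps = 50                 # assumed read bandwidth from storage (GB/s)

ttft_recompute_s = prompt_tokens / prefill_tok_per_s
kv_cache_gb = prompt_tokens * kv_bytes_per_token / 1e9
ttft_reload_s = kv_cache_gb / storage_gbps

print(f"recompute prefill : {ttft_recompute_s:.2f} s")
print(f"reload KV cache   : {ttft_reload_s:.2f} s")
print(f"speedup           : {ttft_recompute_s / ttft_reload_s:.0f}x")
```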
NeuralMesh is the only storage system built specifically for #AI. Want to see how it helps teams overcome performance and scalability hurdles? Take a closer look 👇 weka.ly/4lZkjLw
Boxed in by outdated infrastructure? That's a dead end for your #AI workloads. NeuralMesh is built for the scale, speed, and complexity modern AI demands. Discover how it works: weka.ly/4nL7WUR
What sets #AI leaders apart?
⚡️ They move fast
📈 They scale smart
🛠️ They’ve left legacy infrastructure behind
Exactly what we heard at #RAISESummit—and what we see every day with the teams building on WEKA. weka.ly/4kLAKKh
Traditional storage slows down at scale. NeuralMesh gets faster.
100-node cluster: ~1 min recovery
50-node cluster: ~10 min
Not magic—just math. 👇 weka.ly/40fZKSx
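A minimal sketch of why recovery can get faster as the cluster grows: if the failed node's data is spread across all survivors, every surviving node rebuilds a small slice in parallel, so aggregate rebuild bandwidth rises with node count. The capacity and bandwidth values below are assumptions for illustration, not WEKA's measured figures, and this simple model only captures the linear part of the scaling; the 1 min vs. 10 min numbers in the post are WEKA's own.

```python
# Toy model of declustered rebuild after a single-node failure. All
# parameters are illustrative assumptions, not WEKA measurements.

def recovery_minutes(nodes: int,
                     node_capacity_tb: float = 20.0,
                     per_node_rebuild_gbps: float = 4.0) -> float:
    """Estimate time to re-protect one failed node's data."""
    data_to_rebuild_gb = node_capacity_tb * 1000            # the lost share
    aggregate_gbps = (nodes - 1) * per_node_rebuild_gbps    # parallel rebuild
    return data_to_rebuild_gb / aggregate_gbps / 60

for n in (50, 100, 200):
    print(f"{n:>3} nodes -> ~{recovery_minutes(n):.1f} min to re-protect")
```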
💬 “Partner solutions like WEKA’s NeuralMesh Axon… provide a critical foundation for accelerated inferencing…” — Marc Hamilton, @nvidia
We're proud to power the infrastructure behind the world’s most advanced AI factories. Discover the full capabilities of NeuralMesh Axon:…
At @RaiseSummit, we asked what sets the front-runners apart from those trailing behind in the #AI race. The answers were clear—and they align with what we see every day working with the world’s most innovative AI organizations. #RAISESummit
The @LinusTech team pushed modern infra to the limit—calculating π to 300 trillion digits. With help from WEKA, they hit 150 GB/s throughput, ran nonstop for months, and recovered from power cuts like pros. Here’s how it worked: weka.ly/3IDBeVh

That’s a wrap on #RAISESummit 2025! We unveiled NeuralMesh Axon, dove deep with partners, and explored how to scale enterprise #AI without compromise. 💥 Let’s continue to build the future of AI—together.
What makes NeuralMesh Axon a game-changer for large-scale #AI? @liranzvibel breaks down how GPU-native storage boosts performance, slashes infra costs & maximizes GPU utilization. 🔓 Unlock the full potential of your GPUs: weka.ly/45WPRNe #RAISESummit
Don’t let legacy storage slow you down. NeuralMesh Axon brings ultra-low-latency, distributed storage inside GPU servers—eliminating I/O bottlenecks, boosting utilization, and cutting costs. 📄 Check out the solution brief: weka.ly/3Iz8kWi
