AI at AMD
@AIatAMD
Advancing AI innovation together. Built with devs, for devs. Supported through an open ecosystem. Powered by AMD. #TogetherWeAdvance
Smarter Analytics Start Here! @AMD and @VoltronData have teamed up to power the future of analytics! Introducing Theseus—optimized with AMD Instinct GPUs for unmatched performance. No bottlenecks. Faster insights. Superior collaboration. Discover how this collaboration…

Lemonade Server is a local LLM server built in Python, agile enough to support the ever-evolving LLM ecosystem, robust enough for production. Here's how we made it fast, native, and simple to use: bit.ly/4mdc52v
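Since Lemonade Server runs locally as an LLM server, talking to it from Python is just a matter of posting an OpenAI-style chat request. Here is a minimal sketch; the endpoint URL, port, and model name below are illustrative assumptions, not documented Lemonade defaults:

```python
import json

# Assumed endpoint for a locally running Lemonade Server instance
# (URL, port, and path are assumptions for illustration).
LEMONADE_URL = "http://localhost:8000/api/v1/chat/completions"

def build_chat_request(model: str, prompt: str, temperature: float = 0.7) -> dict:
    """Build an OpenAI-compatible chat-completion payload for a local server."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
        "stream": False,
    }

# Hypothetical model name, chosen only to show the request shape.
payload = build_chat_request("Llama-3.2-1B-Instruct-Hybrid", "Say hello in one word.")
print(json.dumps(payload, indent=2))

# To actually send it (requires a running local server):
#   import urllib.request
#   req = urllib.request.Request(
#       LEMONADE_URL,
#       data=json.dumps(payload).encode(),
#       headers={"Content-Type": "application/json"},
#   )
#   with urllib.request.urlopen(req) as resp:
#       print(json.load(resp)["choices"][0]["message"]["content"])
```

Because the request shape follows the OpenAI chat-completions convention, the same payload works with any client library that speaks that protocol.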

🚀 @MirantisIT + AMD = Next-gen AI & HPC done right Powered by AMD Instinct MI300X GPUs (192 GB HBM3 & 5.3 TB/s BW), Mirantis k0rdent delivers cloud-native scalability & DevOps-driven GPU orchestration on #Kubernetes. Read more: bit.ly/44GMZTB #Mirantis #AMD…

This tutorial walks through profiling the Scout-17B-16E-Instruct model using vLLM on AMD GPUs with ROCm — complete with kernel traces and Perfetto visualizations. Read Tutorial → rocm.docs.amd.com/projects/ai-de…
What does it take to build an AI-generated film? Go behind the scenes of VOID RUN with @jtatarchuk from @TensorWaveCloud, @alexmashrabov from @higgsfield_ai, and @AMD as we explore how we brought Em to life using custom models and MI325 GPUs. Watch the full video here:…
Discover how to harness AI and AMD MI300X GPUs to vibe-code your own games, with step-by-step guidance on coding and prompting techniques. We used DeepSeek-R1 to generate a Pac-Man-inspired game in just 9 prompts! Here’s how we did it: rocm.blogs.amd.com/artificial-int…
AMD and @xmpro are revolutionizing AI at the industrial edge with local LLMs, fast inferencing, and full data sovereignty, all powered by Ryzen AI + Lemonade Server. Learn how you can use edge devices for your industrial AI needs here: amd.com/en/developer/r…

Proud of the team and outstanding work!!! Extending the Instella family of models with a text-to-image model trained from scratch on @AMD MI300X; fully open (dataset, training code, checkpoints, and a detailed blog) to help reproducibility and push research forward. Not only that,…
Run local LLMs in minutes with Lemonade Server, no coding needed. Works with Open WebUI, Microsoft AI Dev Gallery, and more on AMD Ryzen AI laptops. Watch the demo 👇 youtube.com/watch?v=mcf7dD…
Want to train a text-to-image diffusion model from scratch in less than a day? With deferred patch masking, introduced by MicroDiT to reduce sequence length, a high-compression latent space introduced by DC-AE that achieves a 32x compression ratio, and improved representation alignment…
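The speedup in that recipe comes from shrinking the token sequence the transformer backbone sees: a DC-AE-style 32x spatial compression shrinks the latent grid, and patch masking then drops a large fraction of the remaining tokens during training. A toy NumPy sketch (all numbers and the mask ratio are illustrative assumptions, not MicroDiT's or DC-AE's actual configs):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy configuration, chosen only for illustration.
image_hw = 256            # input image resolution
compression = 32          # DC-AE-style spatial compression ratio
mask_ratio = 0.75         # fraction of patch tokens dropped during training
channels = 16             # latent channels (assumption)

# High-compression autoencoder: 256x256 image -> 8x8 latent grid.
latent_hw = image_hw // compression
n_tokens = latent_hw * latent_hw            # 64 patch tokens
tokens = rng.normal(size=(n_tokens, channels))

# Patch masking: keep a random subset of tokens, so the expensive
# transformer backbone processes a much shorter sequence.
n_keep = int(n_tokens * (1 - mask_ratio))   # 16 tokens survive
keep_idx = rng.permutation(n_tokens)[:n_keep]
visible = tokens[keep_idx]

print(n_tokens, visible.shape[0])
```

Since self-attention cost grows roughly quadratically in sequence length, processing 4x fewer tokens cuts the attention compute per step by about 16x in this toy setup.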
We’re thrilled to collaborate with the @HazyResearch @StanfordAILab, led by Chris Ré, to power Minions, their cutting-edge agentic framework tackling the cost-accuracy tradeoff in modern AI systems. This innovation is enabled on AMD Ryzen AI, thanks to seamless integration with…

🚀 We’re excited to partner with @HuggingFace to launch a new section of their MCP Course: Local Tiny Agents with AMD NPU and iGPU Acceleration — powered by Lemonade Server 🍋 github.com/lemonade-sdk/l… In this hands-on module, you’ll learn how to: ✅ Accelerate end-to-end Tiny…

When model architecture meets hardware fluency, magic happens.
The story of hybrid architectures is honestly fascinating! I've been diving deep into why Transformers became the default choice, and looking at new model architectures. It's not because "attention is all you need" (though that's catchy!). It's because they exploited GPU parallelism so…
What does sovereign AI really mean, and why does it matter to you? @kbsdigital breaks it down in this interview: youtube.com/watch?v=buTrEG…
Interested in what @youravgtechbro thinks is the single biggest advantage developers can have within this new age of AI? Listen in here. #TogetherWeAdvance
LFM2 (Liquid Foundation Model 2) from Liquid AI just dropped today with three model weights: 350M, 700M, and 1.2B. LFM2 is specifically designed to provide the fastest on-device gen-AI experience across the industry. What's more, LFM2 has been optimized on AMD Ryzen AI day 0 and works…
Today, we release the 2nd generation of our Liquid foundation models, LFM2. LFM2 sets the bar for quality, speed, and memory efficiency in on-device AI. Built for edge devices like phones, laptops, AI PCs, cars, wearables, satellites, and robots, LFM2 delivers the fastest…
Follow this tutorial to learn how to set up the @OpenAI Triton development environment and optimize Triton kernel performance on AMD GPUs. rocm.docs.amd.com/projects/ai-de…
