InclusionAI
@InclusionAI666
Open-source projects from Ant Group, including Ling, AReaL, and AWorld. We dedicate our efforts toward AGI, guided by fairness, transparency, and collaboration.
Go download our new model; we're looking forward to your feedback. 🤗
Ming-lite-omni v1.5 🔥 upgrade version of Ming-lite-omni, by @InclusionAI666 huggingface.co/inclusionAI/Mi… ✨ 20.3B / 3B active - MoE ✨ SOTA video understanding via 3D MRoPE + curriculum learning ✨ Real time speech synthesis + dialect support ✨ Enhanced multimodal generation with…
Reproduce the IMO 2025 results with AWorld right now!
🚀 Built a Multi-Agent System in 6h — Solved 5/6 IMO 2025 Problems! Inspired by “Gemini 2.5 Pro Capable of Winning Gold”, we validated its key insight: collective intelligence wins. We took this insight and, using our AWorld multi-agent framework, built a collective…
🚀 Meet ABench, an open-source, ever-evolving benchmark suite created to push Large Language Models beyond generic Q&A and into real expert territory. 👉 Multi-disciplinary challenges: Physics ⚛️, actuarial science 📊, logic 🧩, law ⚖️, psychology 🧠 and more. Each track is…
🚀 Meet M2-Reasoning-7B: AI's New Brain for General & Spatial Reasoning! 😇 We're thrilled to introduce M2-Reasoning-7B, a unified model that excels in both abstract and spatial tasks. Key Highlights: 📊 High-Quality Data Pipeline: Generates vast, top-tier reasoning data. 🧠…
Very helpful feedback: InclusionAI ranked #11 on the Chinese Open Source Heatmap. Stay tuned for our open-source work. @zzqsmall @jxwuyi @gujinjie
The Chinese Open Source Heatmap is live 🔥 You can now track the companies, research labs, and communities powering China's open source AI movement. huggingface.co/spaces/zh-ai-c… Some highlights: ✨ Big tech firms are investing more in open source. -Alibaba: Full stack open ecosystem…
Agents are just getting started, and GAIA isn't over. 👇 This is AWorld's answer. Talk to the AWorld team 🚀
🎉 AWorld just hit Top 5 on the GAIA Test Leaderboard! Proud to be the #1 open-source project—and the only one in GAIA's top 10. GAIA taught us a lot, especially that self-improving general agents remain an open challenge. Some say, "The GAIA game is over," but we strongly…
Remarkable work on reasoning models 🚀
🚀 Meet Ring-lite! Our newly open-sourced lightweight reasoning model that achieves SOTA performance while maintaining exceptional efficiency. Built upon the Ling-lite-1.5 MoE architecture (16.8B total params, 2.75B active), Ring-lite matches the performance of dense models 3x…
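The 16.8B-total / 2.75B-active figure comes from sparse expert activation: a router scores all experts per token but only the top-k actually run. A minimal top-k routing sketch with toy dimensions (illustrative only, not Ring-lite's actual router):

```python
import numpy as np

def topk_moe(x, gate_w, experts, k=2):
    """Route token vector x to the top-k experts and mix their outputs."""
    logits = x @ gate_w                       # one router score per expert
    top = np.argsort(logits)[-k:]             # indices of the k highest-scoring experts
    weights = np.exp(logits[top] - logits[top].max())
    weights /= weights.sum()                  # softmax over the selected experts only
    # Only k of len(experts) expert matmuls execute, so compute scales with k, not n
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

rng = np.random.default_rng(0)
d, n_experts = 8, 16
x = rng.standard_normal(d)
gate_w = rng.standard_normal((d, n_experts))
experts = [rng.standard_normal((d, d)) for _ in range(n_experts)]
out = topk_moe(x, gate_w, experts, k=2)
```

With k=2 of 16 experts active, only ~1/8 of the expert parameters touch each token, which is how a 16.8B model can run with a 2.75B-active footprint.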
🚀 Meet Ming-Omni! Our newly open-sourced unified multimodal model series (including Ming-lite-omni and Ming-plus-omni), capable of processing images, text, audio, and video, while demonstrating strong proficiency in both speech and image generation. 🔥 This release:…
Please check out our Ming-Omni Technical Report: arxiv.org/abs/2506.09344
We will continue to optimize our work; please stay tuned for the Ming-Omni series.
Ming-Omni: A Unified Multimodal Model for Perception and Generation
#AReaL We will keep pushing RL research at the frontier this year. 😃
Reinforcement Learning (RL) is a fundamental component in training large language models, typically involving massive parallelization of alternating tasks: LLM training and model generation. A massive advancement in the efficiency of RL has been shared by Tsinghua University (at…
AReaL boba² is a significant upgrade over AReaL boba, notably introducing asynchronous RL and achieving meaningful breakthroughs. We hope it can assist your reproduction work.
We release fully async RL system AReaL-boba² for LLM & SOTA code RL w. Qwen3-14B! @Alibaba_Qwen #opensource 🚀system&algorithm co-design → 2.77x faster ✅ 69.1 on LiveCodeBench 🔥 multi-turn RL ready 🔗 Project: github.com/inclusionAI/AR… 📄 Paper: arxiv.org/pdf/2505.24298 1/3👇
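The speedup comes from decoupling generation from training: rollout workers keep producing samples while the trainer consumes whatever is ready, instead of the two phases alternating in lockstep. A schematic producer/consumer sketch with plain Python threads (an illustration of the idea, not AReaL's actual system):

```python
import queue
import threading

sample_queue = queue.Queue(maxsize=4)       # bounded buffer of rollouts
trained_batches = []

def rollout_worker(n_rollouts):
    """Generation side: produces samples without waiting for the trainer."""
    for i in range(n_rollouts):
        sample_queue.put(f"rollout-{i}")    # stand-in for an LLM generation
    sample_queue.put(None)                  # sentinel: no more data

def trainer():
    """Training side: consumes rollouts as they arrive (possibly off-policy)."""
    while True:
        sample = sample_queue.get()
        if sample is None:
            break
        trained_batches.append(sample)      # stand-in for a gradient step

gen = threading.Thread(target=rollout_worker, args=(8,))
trn = threading.Thread(target=trainer)
gen.start(); trn.start()
gen.join(); trn.join()
```

The bounded queue keeps the trainer from falling too far behind the generator; a real async RL system additionally has to correct for the staleness this introduces.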
Our new work on a unified multimodal model; we're looking forward to your feedback.
Introducing PromptCoT-Mamba, the first-ever reasoning model built purely on Mamba (no attention, no KV-cache)! Outperforms Gemma3-12B by: +12.3% on AIME'24, +5.4% on AIME'25, +7.7% on LiveCodeBench. ⚡ 3.66× faster inference on a 24GB GPU. Constant memory → perfect for edge deployment…
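The constant-memory claim follows from the recurrent form of state-space models: each step folds the new token into a fixed-size hidden state, whereas attention appends keys and values to a cache that grows with sequence length. A toy scalar comparison (illustrative only, not PromptCoT-Mamba's actual recurrence):

```python
def ssm_generate(xs, A=0.9, B=1.0):
    """Recurrent generation: one fixed-size state, however long the sequence."""
    h, outs = 0.0, []
    for x in xs:
        h = A * h + B * x          # fold the new input into the state: O(1) memory
        outs.append(h)
    return outs

def kv_cache_size(num_tokens):
    """Attention-style generation keeps keys/values for every past token."""
    cache = []
    for t in range(num_tokens):
        cache.append(t)            # the cache grows linearly: O(t) memory
    return len(cache)

outs = ssm_generate([1.0] * 100)
```

After 100 tokens the attention-style cache holds 100 entries, while the recurrence still carries a single scalar state; that flat memory profile is what makes long-sequence inference on a small GPU feasible.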
Pleased to share our work 😃; stay tuned. @InclusionAI666
Ming-Lite-Uni: An Open-Source AI Framework Designed to Unify Text and Vision through an Autoregressive Multimodal Structure Researchers from Inclusion AI, Ant Group introduced Ming-Lite-Uni, an open-source framework designed to unify text and vision through an autoregressive…
Congrats 👏👏👏 AWorld provides a robust, reproducible playground where LLMs meet powerful tools. They are hiring; see more details at Ant Group's exhibition booth. #ICLR2025
🏆 AWorld just achieved #1 among open-source frameworks on GAIA. pip install aworld github.com/inclusionAI/AW… @InclusionAI666 🚀 Why AWorld? The journey from basic prompts to complex multi-agent systems is challenging. AWorld accelerates this evolution. #AI #OpenSource #MultiAgent