2025
- April 1 - Ring: A Reasoning MoE LLM Provided and Open-sourced by InclusionAI
- April 1 - PromptCoT & PromptCoT-Mamba: Advancing the Frontiers of Reasoning
- April 1 - AReaL: Ant Reasoning Reinforcement Learning for LLMs
- April 1 - Agentic Learning
- May 5 - Ming-Lite-Omni-Preview: A MoE Model Designed to Perceive a Wide Range of Modalities
- May 7 - Ming-Lite-Uni: Advancements in Unified Architecture for Natural Multimodal Interaction
- May 8 - Ling: A MoE LLM Provided and Open-sourced by InclusionAI
- June 11 - Ming-Omni: A Unified Multimodal Model for Perception and Generation
- July 7 - AWorld: The Agent Runtime for Self-Improvement
- July 8 - ABench: An Evolving Open-Source Benchmark
- July 11 - M2-Reasoning: Empowering MLLMs with Unified General and Spatial Reasoning
- July 21 - Introducing Ming-Lite-Omni V1.5
- August 5 - Introducing Ring-lite-2507
- September 13 - Segmentation-as-Editing for Unified Multimodal AI
- October 1 - Ming-UniVision: Joint Image Understanding and Generation via a Unified Continuous Tokenizer
- October 1 - Ming-UniAudio: Speech LLM for Joint Understanding, Generation and Editing with Unified Representation
- October 28 - Ming-flash-omni-Preview: A Sparse, Unified Architecture for Multimodal Perception and Generation