
Model

From foundation language models to unified multimodal perception and generation — open-source and built for everyone.

Ling: A MoE LLM Provided and Open-sourced by InclusionAI (Foundation)

A Mixture-of-Experts large language model fully open-sourced by InclusionAI.

May 8, 2025
Ring: A Reasoning MoE LLM Provided and Open-sourced by InclusionAI (Reasoning)

A reasoning-focused MoE language model open-sourced by InclusionAI.

Apr 1, 2025
Introducing Ring-lite-2507 (Reasoning)

An updated lightweight reasoning model with improved performance.

Aug 5, 2025
PromptCoT & PromptCoT-Mamba: Advancing the Frontiers of Reasoning (Reasoning)

Novel chain-of-thought prompting methods for enhanced reasoning.

Apr 1, 2025
M2-Reasoning: Empowering MLLMs with Unified General and Spatial Reasoning (Reasoning)

A unified reasoning framework covering general and spatial understanding.

Jul 11, 2025
Ming-Omni: A Unified Multimodal Model for Perception and Generation (Multi-Modal)

Unified multimodal model processing images, text, audio, and video.

Jun 11, 2025
Introducing Ming-Lite-Omni V1.5 (Multi-Modal)

Updated lightweight omni model with improved multimodal capabilities.

Jul 21, 2025
Ming-Lite-Omni-Preview: A MoE Model Designed to Perceive a Wide Range of Modalities (Multi-Modal)

Preview release of Ming-Lite-Omni with broad modality support.

May 5, 2025
Ming-flash-omni-Preview: A Sparse, Unified Architecture for Multimodal Perception and Generation (Multi-Modal)

Flash variant with sparse architecture for efficient multimodal processing.

Oct 28, 2025
Segmentation-as-Editing for Unified Multimodal AI (Multi-Modal)

Extends Ming-Lite-Omni with segmentation-as-editing capabilities.

Sep 13, 2025
Ming-Lite-Uni: Advancements in Unified Architecture for Natural Multimodal Interaction (Multi-Modal)

Unified architecture enabling natural interaction across modalities.

May 7, 2025
Ming-Omni-TTS: Simple and Efficient Unified Generation of Speech, Music, and Sound with Precise Control (Multi-Modal)

Text-to-speech and audio generation with fine-grained control.

Mar 4, 2026
Ming-UniAudio: Speech LLM for Joint Understanding, Generation and Editing with Unified Representation (Multi-Modal)

Speech language model unifying audio understanding and generation.

Oct 1, 2025
Ming-UniVision: Joint Image Understanding and Generation via a Unified Continuous Tokenizer (Multi-Modal)

Unified vision model for joint image understanding and generation.

Oct 1, 2025