
🌟 Overview

ABench is an evolving open-source benchmark suite designed to rigorously evaluate and improve Large Language Models (LLMs) on complex, cross-domain tasks. By targeting current model weaknesses, ABench poses systematic challenges in high-difficulty specialized domains, including physics, actuarial science, logical reasoning, law, and psychology.

🎯 Core Objectives

  1. Address Evaluation Gaps: Design high-differentiation assessment tasks targeting question types where current models underperform
  2. Establish Unified Standards: Create reliable, comparable benchmarks for multi-domain LLM evaluation
  3. Expand Capability Boundaries: Drive continuous improvement of model knowledge and reasoning mechanisms through challenging, novel problems

📊 Dataset Release Status

| Domain | Description | Status |
| --- | --- | --- |
| Physics | 500 university/competition-level physics problems (400 static + 100 dynamic parametric variants; see the sketch below) covering 10+ fields, from classical mechanics to modern physics | ✅ Released |
| Actuary | Curated actuarial exam problems covering core topics: probability and statistics, financial mathematics, life/non-life insurance, actuarial models, and risk management | ✅ Released |
| Logic | High-differentiation logical reasoning problems from authoritative tests (LSAT/GMAT/GRE/SBI/Chinese Civil Service Exam) | 🔄 In Preparation |
| Psychology | Psychological case studies and research questions (objective and subjective) evaluating understanding of human behavior and psychological theories | 🔄 In Preparation |
| Law | Authoritative judicial exam materials covering core legal domains: criminal/civil/administrative/procedural/international law | 🔄 In Preparation |
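The Physics set's "dynamic parametric variants" are problems whose numeric parameters are re-sampled at evaluation time, so answers memorized from the static versions do not transfer. This section does not include the actual generator, so the following is a minimal Python sketch of the idea only; `projectile_variant`, the parameter ranges, and the grading convention are hypothetical illustrations, not part of ABench.

```python
import math
import random

# Hypothetical illustration: ABench's real variant generator is not shown
# in this section. The idea behind a "dynamic parametric variant" is a
# problem template whose numeric parameters are re-sampled per evaluation
# run, so memorized answers to the static version do not transfer.

G = 9.8  # gravitational acceleration, m/s^2

def projectile_variant(rng: random.Random) -> dict:
    """Sample one projectile-motion variant and compute its ground truth."""
    v0 = rng.uniform(10.0, 40.0)                  # launch speed, m/s
    angle_deg = rng.choice([30, 37, 45, 53, 60])  # launch angle, degrees
    theta = math.radians(angle_deg)
    # Range on level ground: R = v0^2 * sin(2*theta) / g
    horizontal_range = v0 ** 2 * math.sin(2 * theta) / G
    return {
        "question": (
            f"A projectile is launched at {v0:.1f} m/s, {angle_deg} degrees "
            f"above the horizontal. With g = {G} m/s^2, find its range (m)."
        ),
        "answer": round(horizontal_range, 2),     # graded with a tolerance
    }

if __name__ == "__main__":
    rng = random.Random(2024)  # fixed seed -> reproducible variant set
    for _ in range(3):
        item = projectile_variant(rng)
        print(item["question"], "->", item["answer"])
```

Seeding the sampler keeps any single evaluation run reproducible, while changing the seed across runs yields fresh parameter values and answers.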