Leco Li (imnotkitty)
Building on HF

AI & ML interests: None yet

Recent Activity

reacted to mayafree's post with 🚀 about 5 hours ago
Leaderboard of Leaderboards — A Real-Time Meta-Ranking of AI Benchmarks
https://huggingface.co/spaces/MAYA-AI/all-leaderboard

Hundreds of AI leaderboards exist on Hugging Face. Knowing which ones the community actually trusts has never been easy — until now. Leaderboard of Leaderboards (LoL) ranks the leaderboards themselves, using live Hugging Face trending scores and cumulative likes as the signal. No editorial curation. No manual selection. Just what the global AI research community is actually visiting and endorsing, surfaced in real time.

Sort by trending to see what is capturing attention right now, or by likes to see what has built lasting credibility over time. Nine domain filters let you zero in on what matters most to your work, and every entry shows both its rank within this collection and its real-time global rank across all Hugging Face Spaces.

The collection spans well-established standards like Open LLM Leaderboard, Chatbot Arena, MTEB, and BigCodeBench alongside frameworks worth watching. FINAL Bench targets AGI-level evaluation across 100 tasks in 15 domains and recently reached the global top 5 in Hugging Face dataset rankings. Smol AI WorldCup runs tournament-format competitions for sub-8B models scored via FINAL Bench criteria. ALL Bench aggregates results across frameworks into a unified ranking that resists the overfitting risks of any single standard.

The deeper purpose is not convenience. It is transparency. How we measure AI matters as much as the AI we measure.
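The ranking idea described above can be sketched in a few lines: each entry carries a live trending score and a cumulative like count, and each view is simply a descending sort on whichever signal the reader picks. This is a minimal illustration, not the Space's actual code; the field names (`trending_score`, `likes`) and the numbers below are placeholder assumptions.

```python
# Hypothetical sketch of the LoL meta-ranking: sort leaderboard entries by
# one of two community signals. All values below are illustrative, not real.

leaderboards = [
    {"name": "Open LLM Leaderboard", "likes": 12000, "trending_score": 35},
    {"name": "Chatbot Arena",        "likes": 9500,  "trending_score": 80},
    {"name": "MTEB",                 "likes": 4200,  "trending_score": 22},
    {"name": "BigCodeBench",         "likes": 1100,  "trending_score": 47},
]

def rank(entries, by="trending_score"):
    """Return entries sorted descending by the chosen signal ('trending_score' or 'likes')."""
    return sorted(entries, key=lambda e: e[by], reverse=True)

# With these placeholder numbers, the two views surface different leaders:
print(rank(leaderboards, by="trending_score")[0]["name"])  # Chatbot Arena
print(rank(leaderboards, by="likes")[0]["name"])           # Open LLM Leaderboard
```

The point of exposing both sorts is exactly the post's distinction: trending captures momentary attention, likes capture accumulated credibility.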
reacted to SeaWolf-AI's post with 🚀 8 days ago
ALL Bench — Global AI Model Unified Leaderboard
https://huggingface.co/spaces/FINAL-Bench/all-bench-leaderboard

If you've ever tried to compare GPT-5.2 and Claude Opus 4.6 side by side, you've probably hit the same wall: the official Hugging Face leaderboard only tracks open-source models, so the most widely used AI systems simply aren't there. ALL Bench fixes that by bringing closed-source models, open-weight models, and — uniquely — all four teams under South Korea's national sovereign AI program into a single leaderboard. Thirty-one frontier models, one consistent scoring scale.

Scoring works differently here too. Most leaderboards skip benchmarks a model hasn't submitted, which lets models game their ranking by withholding results. ALL Bench treats every missing entry as zero and divides by ten, so there's no advantage in hiding your weak spots. The ten core benchmarks span reasoning (GPQA Diamond, AIME 2025, HLE, ARC-AGI-2), coding (SWE-bench Verified, LiveCodeBench), and instruction-following (IFEval, BFCL).

The standout is FINAL Bench — the world's only benchmark measuring whether a model can catch and correct its own mistakes. It reached rank five in global dataset popularity on Hugging Face in February 2026 and has been covered by Seoul Shinmun, Asia Economy, IT Chosun, and Behind.

Nine interactive charts let you explore everything from composite score rankings and a full heatmap to an open-vs-closed scatter plot. Operational metrics like context window, output speed, and pricing are included alongside benchmark scores. All data is sourced from Artificial Analysis Intelligence Index v4.0, arXiv technical reports, Chatbot Arena ELO ratings, and the Korean Ministry of Science and ICT's official evaluation results. Updates monthly.
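The missing-as-zero rule the post describes can be made concrete with a short sketch: sum whatever scores a model reports and divide by the fixed count of ten benchmarks, never by how many it chose to submit. This is a reading of the stated rule, not ALL Bench's actual code; the benchmark names and scores below are illustrative placeholders.

```python
# Sketch of ALL Bench's stated aggregation: a fixed denominator of ten means
# every unsubmitted benchmark effectively scores zero.

NUM_BENCHMARKS = 10  # the ten core benchmarks, per the post

def composite(reported: dict) -> float:
    """Composite score with missing benchmarks counted as zero."""
    return sum(reported.values()) / NUM_BENCHMARKS

# A model that reports only its three strongest results (placeholder values):
reported = {"GPQA Diamond": 90.0, "SWE-bench Verified": 88.0, "IFEval": 92.0}

naive = sum(reported.values()) / len(reported)  # 90.0 — averaging only what was submitted
strict = composite(reported)                    # 27.0 — the seven gaps count as zero
```

The comparison shows why the rule resists gaming: under a naive average, withholding weak results inflates the score, while under the fixed-denominator rule it can only lower it.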

Organizations

Blog-explorers