ICLR 2026 Acceptance Prediction: Benchmarking with Tabular Machine Learning

Please note: These are just predictions based on data. Don't take them too seriously! We wish everyone the best of luck with their submissions. 🍀

ICLR 2026 Prediction Lookup

Enter your OpenReview submission ID to check predictions from our tabular ML models.

📊 Model Performance Overview

Total Predictions · Accept Rate · Reject Rate

Benchmark Results (2025)

Historical accuracy of our models on the ICLR 2025 dataset.

Overall Accuracy (binary classification): 87.5%
F1 Score (macro average): 0.64
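
As a point of reference, this is how the two numbers are defined; a minimal sketch with scikit-learn, where the label and prediction arrays are placeholders rather than the actual 2025 evaluation data.

```python
from sklearn.metrics import accuracy_score, f1_score

# Placeholder arrays standing in for the real ICLR 2025 labels and model outputs.
y_true = [1, 0, 0, 1, 1, 0, 1, 0]  # 1 = accept, 0 = reject
y_pred = [1, 0, 1, 1, 0, 0, 1, 0]

# Overall accuracy: fraction of decisions predicted correctly.
accuracy = accuracy_score(y_true, y_pred)

# Macro-averaged F1: F1 is computed per class (accept, reject) and then averaged,
# so the rarer "accept" class counts as much as "reject".
macro_f1 = f1_score(y_true, y_pred, average="macro")

print(f"accuracy={accuracy:.3f}  macro_f1={macro_f1:.3f}")
```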

Model Leaderboard

CatBoost consistently outperforms the Logistic Regression baseline on tabular review data, achieving higher precision on borderline Spotlight/Poster decisions.
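
A head-to-head along these lines can be sketched with cross-validation; the feature matrix below is synthetic and the model settings (iterations, depth) are illustrative stand-ins, not the project's actual configuration.

```python
from catboost import CatBoostClassifier
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for the engineered review-score feature matrix.
X, y = make_classification(n_samples=1000, n_features=12, n_informative=6, random_state=0)

models = {
    "logistic_regression": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    "catboost": CatBoostClassifier(iterations=300, depth=6, verbose=0, random_seed=0),
}

for name, model in models.items():
    # 5-fold cross-validated macro-F1 as a rough leaderboard comparison.
    scores = cross_val_score(model, X, y, cv=5, scoring="f1_macro")
    print(f"{name}: {scores.mean():.3f} ± {scores.std():.3f}")
```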

About This Project

This project applies tabular machine learning models to predict ICLR paper acceptance decisions. Unlike LLM-based approaches, we focus on structured feature engineering from review scores, ratings, and metadata.

📊 Tabular Data Approach

Leverages structured features from review scores, ratings, and paper metadata without relying on text content.
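
A minimal sketch of what this looks like in practice, assuming a hypothetical per-review table with submission_id, rating, and confidence columns (the real OpenReview schema may differ):

```python
import pandas as pd

# Hypothetical per-review records; column names are illustrative only.
reviews = pd.DataFrame({
    "submission_id": ["p1", "p1", "p1", "p2", "p2"],
    "rating":        [6, 8, 5, 3, 5],
    "confidence":    [4, 3, 4, 5, 2],
})

# Collapse the per-review rows into one structured row per paper.
# Only numeric review signals are used, never the review text itself.
features = reviews.groupby("submission_id").agg(
    rating_mean=("rating", "mean"),
    rating_min=("rating", "min"),
    rating_max=("rating", "max"),
    confidence_mean=("confidence", "mean"),
    num_reviews=("rating", "size"),
)
print(features)
```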

🤖 Multiple Models

Ensemble of CatBoost, TabPFN, Logistic Regression, and Decision Trees for robust predictions.
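
A rough sketch of such an ensemble using scikit-learn's soft voting on synthetic data; a TabPFN classifier exposes the same estimator interface and could be appended to the list in the same way (left out here to keep the example dependency-light).

```python
from catboost import CatBoostClassifier
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the engineered tabular features.
X, y = make_classification(n_samples=800, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Soft voting averages each model's predicted accept probability, which keeps
# the ensemble robust when any single model is miscalibrated.
ensemble = VotingClassifier(
    estimators=[
        ("catboost", CatBoostClassifier(iterations=200, verbose=0, random_seed=0)),
        ("logreg", LogisticRegression(max_iter=1000)),
        ("tree", DecisionTreeClassifier(max_depth=5, random_state=0)),
    ],
    voting="soft",
)
ensemble.fit(X_train, y_train)
print("held-out accuracy:", ensemble.score(X_test, y_test))
```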

🎯 Feature Engineering

Advanced feature engineering including multi-value column expansion, summary statistics, and year-based deltas.
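
To make those three transforms concrete, here is a toy version in pandas; the column names and the semicolon-delimited ratings format are assumptions for the example, not the project's actual schema.

```python
import pandas as pd

# Toy submissions table: "ratings" packs all reviewer scores for a paper into
# one delimited string, as multi-value fields often appear in scraped metadata.
df = pd.DataFrame({
    "submission_id": ["a", "b", "c", "d"],
    "year": [2024, 2024, 2025, 2025],
    "ratings": ["6;8;5", "3;5;4", "8;8;6", "5;6;5"],
})

# Multi-value column expansion: split the delimited scores into rating_0..rating_n.
expanded = df["ratings"].str.split(";", expand=True).astype(float)
expanded.columns = [f"rating_{i}" for i in range(expanded.shape[1])]
df = pd.concat([df, expanded], axis=1)

# Summary statistics over the expanded rating columns.
df["rating_mean"] = df[expanded.columns].mean(axis=1)
df["rating_std"] = df[expanded.columns].std(axis=1)

# Year-based delta: distance from the mean rating of the same year, which
# normalizes away year-to-year shifts in reviewer calibration.
df["rating_delta_vs_year"] = df["rating_mean"] - df.groupby("year")["rating_mean"].transform("mean")

print(df[["submission_id", "rating_mean", "rating_std", "rating_delta_vs_year"]])
```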

Experimental Results

Performance metrics for our tabular ML models on ICLR acceptance prediction.

Model Comparison

Prediction Distribution

Download Predictions

Download the complete prediction dataset with all model outputs.
