LLMs for Optimization: Modeling, Solving, and Validating with Generative AI

Optimization is a foundational pillar of artificial intelligence (AI), underpinning core techniques in planning, scheduling, decision-making, and machine learning. Yet despite decades of algorithmic advances, widespread adoption of state-of-the-art optimization solvers remains limited by the substantial expertise required for effective modeling and solving. This expertise barrier leaves powerful optimization tools largely inaccessible to non-experts; most users of leading solvers hold advanced degrees.

Recent advances in generative AI, particularly large language models (LLMs), offer a promising new path for democratizing optimization. By automating key steps in the optimization pipeline – from model formulation through solver configuration to model validation – LLMs promise to broaden access to these tools. However, they rarely work out of the box for complex reasoning tasks like optimization.

This tutorial surveys emerging research on LLMs for mathematical optimization, highlighting both practical systems and open research questions. We will provide a comprehensive overview of how LLMs can support each stage of the optimization pipeline, including model formulation, solver configuration, and validation. The tutorial is designed to be accessible to attendees without prior experience in either field, offering both conceptual frameworks and practical insights for this rapidly evolving area of research.
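To make the pipeline concrete, here is a minimal sketch, not drawn from the tutorial materials: a toy product-mix problem walked through the three stages, where the hand-written formulation stands in for what an LLM would produce from a natural-language description, and SciPy's `linprog` is used purely as an illustrative off-the-shelf solver.

```python
# Illustrative sketch (assumptions: toy problem, SciPy as the solver).
from scipy.optimize import linprog

# Stage 1 -- Model formulation: an LLM might translate
# "maximize profit 3x + 5y subject to x + 2y <= 14, 3x - y >= 0, x - y <= 2"
# into solver-ready data structures like these.
c = [-3, -5]                       # linprog minimizes, so negate the profit
A_ub = [[1, 2], [-3, 1], [1, -1]]  # <= constraints (the >= row is negated)
b_ub = [14, 0, 2]

# Stage 2 -- Model solving: hand the formulation to an off-the-shelf solver.
result = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])

# Stage 3 -- Model validation: check the returned solution against the
# original constraints before trusting it.
x, y = result.x
assert x + 2 * y <= 14 + 1e-6 and 3 * x - y >= -1e-6 and x - y <= 2 + 1e-6
print(f"optimal plan: x={x:.2f}, y={y:.2f}, profit={3 * x + 5 * y:.2f}")
```

In practice, each stage is where LLMs can help and where they can fail: a mistranslated constraint in stage 1 or an unchecked solution in stage 3 silently produces a wrong answer, which is why validation is treated as a first-class topic in the tutorial.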

Tutorial

AAAI 2026 Schedule — January 20th, 2026

| Section | Speaker | Duration | Slides |
| --- | --- | --- | --- |
| Introduction to Optimization and LLMs | Connor Lawless | 30 minutes | Slides |
| Model Formulation | Connor Lawless | 90 minutes | Slides |
| Break | | 30 minutes | |
| Model Solving | Ellen Vitercik | 60 minutes | Slides |
| Model Validation and Open Questions | Connor Lawless | 30 minutes | Slides |


About Us