About The Workshop
This workshop explores the growing capabilities of large language models (LLMs), such as OpenAI's o1 model, in reasoning, planning, and decision-making, highlighting recent advances and challenges. We aim to examine how reinforcement learning methods, post-training optimization, and efficient inference techniques can further enhance LLMs' reasoning capabilities. Topics include training approaches for enhancing reasoning and planning abilities, scaling inference for complex tasks, developing robust benchmarks, and extending LLMs to multi-modal and embodied environments. We will also discuss broader themes such as causal reasoning, collaborative multi-agent systems, uncertainty, and explainability to offer insights and guidance for the further development of reasoning and planning in LLMs.
Topics
The workshop will cover a range of topics, including but not limited to:
We will explore the application of reinforcement learning (RL) algorithms and other effective approaches to enhancing LLM reasoning and planning abilities during both pre-training and post-training stages. We will examine how techniques like Reinforcement Learning from Human Feedback (RLHF) can be adapted and expanded for efficient reasoning. Key questions include:
- How can RL and other effective methods be utilized in pre-training to improve reasoning abilities?
- What post-training approaches (e.g., fine-tuning, RLHF) are most effective for LLM planning tasks?
- How can synthetic data generation and self-supervised training enhance LLM reasoning and planning?
We will discuss challenges and innovations in scaling up reasoning during inference. As models become larger and tasks more complex, efficient inference mechanisms are critical. Topics of interest include:
- What are the most promising methods for scaling inference times in reasoning-heavy tasks?
- How can models dynamically allocate resources during inference to optimize for reasoning and planning?
Developing robust benchmarks for evaluating reasoning and planning in LLMs is critical for tracking progress. This session will address the need for new metrics and standardized tasks to assess reasoning abilities across different scenarios. Key discussions will include:
- What benchmarks can accurately reflect the reasoning and planning capabilities of LLMs?
- How do we design tasks that evaluate long-horizon reasoning and complex decision-making?
As LLMs increasingly integrate with multi-modal environments, reasoning across multiple data types (e.g., vision, sound, text) becomes increasingly important. This session will explore the application of reasoning and planning in multi-modal and embodied AI systems, including robotics and real-world interactions:
- How can LLMs enhance multi-modal reasoning and planning to better interact with diverse environments?
- What are the key challenges and opportunities in applying LLMs to multi-modal tasks, including those requiring embodied reasoning?
In addition to the core themes mentioned above, our discussions will also encompass a broader range of emerging topics, including:
- Causal Reasoning: How can LLMs move beyond pattern recognition to infer causal relationships?
- Collaborative Reasoning in Multi-Agent Systems: How can LLMs enable multi-agent cooperation for distributed tasks?
- Uncertainty and Robustness: How can LLMs improve reasoning under ambiguous information?
- Human-in-the-Loop Systems: How can human feedback refine LLM decision-making processes?
- Explainability: How can we make LLM reasoning and planning more transparent and interpretable for real-world applications?
Call For Papers
The Reasoning and Planning for LLMs Workshop at ICLR 2025 invites submissions on the development of novel architectures, algorithms, theoretical analyses, empirical studies, and applications in reasoning and planning with LLMs. Submissions must present original, unpublished research.
Key Dates
- Paper Deadline: February 2, 2025 (AOE)
- Notification: March 3, 2025 (AOE)
- Camera-ready: April 3, 2025
Submission Site
Submissions will be managed via OpenReview. Papers will remain private during the review process. All authors must maintain up-to-date OpenReview profiles to ensure proper conflict-of-interest management and paper matching. Incomplete profiles may result in desk rejection.
Learn how to create an OpenReview profile here.
Submit papers through the Reasoning and Planning for LLMs Workshop Submission Portal on OpenReview.
Scope
We welcome contributions across a broad spectrum of topics, including but not limited to:
- Training methodologies for enhancing reasoning and planning in LLMs
- Efficient inference for complex reasoning tasks
- Benchmarking reasoning and planning capabilities
- Multi-modality and embodiment in LLMs
- Emerging trends in LLM reasoning and planning
Submission Guidelines
Formatting Requirements
Submissions must be in English and follow the Reasoning and Planning for LLMs Workshop LaTeX Template (adapted from the ICLR 2025 template). Papers must be submitted as a single PDF file:
- Long Papers: at most 9 pages (main text)
- Tiny Papers: between 2 and 4 pages (main text)
- References and appendices are not included in the page limit, but the main text must be self-contained. Reviewers are not required to read beyond the main text.
Submissions exceeding the page limit will be desk rejected.
Anonymity
The workshop follows a double-blind review process. Submissions must be anonymized by removing author names, affiliations, and acknowledgments. Prior work should be cited in the third person. Identifying information, including in supplementary materials, must be omitted.
Dual Submission and Non-Archival Policy
Submissions under review at other venues are welcome, provided they do not violate the dual-submission or anonymity policies of those venues. Accepted papers will not be indexed and will have no archival proceedings. We also welcome ICML 2024 or ACL 2024 submissions.
Transparency
By submitting to the Reasoning and Planning for LLMs Workshop, authors agree that for all accepted papers, the original submission, reviews, and meta-reviews will be made publicly available on OpenReview.
Contact
Email: zhiyuanhucs@gmail.com
Speakers and Panelists (Tentative)
Organizers
This workshop is organized by