Overview
The evaluation of deep learning models often involves navigating trade-offs among multiple criteria. This tutorial provides a structured overview of gradient-based multi-objective optimization (MOO) for deep learning models. We begin with the foundational theory, systematically exploring three core solution strategies: identifying a single balanced solution, finding a discrete set of Pareto optimal solutions, and learning a continuous Pareto set. We will cover their algorithmic details, convergence guarantees, and generalization properties. The second half of the tutorial focuses on applying MOO to Large Language Models (LLMs). We will demonstrate how MOO offers a principled framework for fine-tuning and aligning LLMs, effectively navigating trade-offs among multiple objectives. Through practical demonstrations of state-of-the-art methods, participants will gain hands-on insight into how these techniques are implemented and applied. The session will conclude by discussing emerging challenges and future research directions, equipping attendees to tackle multi-objective problems in their own work. This tutorial is based on our survey paper Gradient-Based Multi-Objective Deep Learning: Algorithms, Theories, Applications, and Beyond.
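To make the first strategy concrete, here is a minimal sketch (not taken from the tutorial materials) of an MGDA-style update for two objectives: the two loss gradients are combined into their minimum-norm convex combination, which yields a common descent direction for both losses. The function name and the two-objective closed form are illustrative assumptions.

```python
# Minimal MGDA-style sketch for two objectives (illustrative, not the
# tutorial's reference implementation).
import numpy as np

def mgda_direction(g1: np.ndarray, g2: np.ndarray) -> np.ndarray:
    """Return the minimum-norm point in the convex hull of {g1, g2}."""
    diff = g1 - g2
    denom = diff @ diff
    if denom == 0.0:  # identical gradients: any convex combination works
        return g1.copy()
    # Closed-form solution of min_{a in [0,1]} ||a*g1 + (1-a)*g2||^2
    alpha = np.clip(((g2 - g1) @ g2) / denom, 0.0, 1.0)
    return alpha * g1 + (1.0 - alpha) * g2

# Example: two conflicting gradients. The combined direction has a
# non-negative inner product with each, so a small step along -d does
# not increase either objective to first order.
g1 = np.array([1.0, 0.0])
g2 = np.array([0.0, 1.0])
d = mgda_direction(g1, g2)  # -> array([0.5, 0.5])
```

In a deep learning setting, `g1` and `g2` would be the gradients of the two task losses with respect to the shared parameters, and `d` would replace the single-task gradient in the optimizer step.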
Speakers
Schedule
1. Introduction to MOO in Deep Learning
2. Finding a Single Pareto Optimal Solution
3. Finding a Finite Set of Solutions
4. Finding an Infinite Set of Solutions
5. Theoretical Foundations
6. Applications in Deep Learning
7. Open Challenges and Future Directions
Venue
📅 Date & Time: 15:45–18:45, August 29th, 2025
📍 Location: Langham Place, Guangzhou, China
🚪 Room: Great Hall F
Materials
Contact
For any questions regarding the tutorial, please contact Weiyu Chen.