The Role and Potential of Explainability in Interactive Multiobjective Optimization
Real-world decision-making often involves balancing conflicting objectives, which can be modeled as multiobjective optimization problems. Instead of a single optimum, these problems yield multiple Pareto optimal solutions, none of which is strictly better than another. Because Pareto optimal solutions cannot be fully ordered on mathematical grounds alone, a decision maker must choose the most preferred one as the final solution. To support decision makers in this process, various multiobjective optimization methods exist; they are classified as a priori, a posteriori, or interactive, depending on when preference information is incorporated. We focus on interactive multiobjective optimization methods, which enable decision makers to iteratively explore solutions, express preferences, and learn about both the problem and their own goals.

Despite their strengths, existing interactive methods often lack transparency, making it difficult for decision makers to understand how their preferences shape the solutions. This opacity can limit trust, hinder preference articulation, and complicate justifying decisions to others. To address these issues, the work covered in my PhD thesis introduces new ways to enhance interactive methods with explainability, drawing on the field of explainable artificial intelligence. Specifically, two methods are showcased: R-XIMO and XLEMOO. R-XIMO focuses on guiding preference articulation and helping decision makers understand why specific solutions are proposed, while XLEMOO employs rule-based models to explain the structure and trade-offs of preferred solutions. Both methods have been implemented and made openly available through DESDEO, an open-source software framework for interactive multiobjective optimization that is integral to this work. DESDEO provides a flexible platform for experimentation, extension, and adoption by both researchers and practitioners.

By integrating explainability into interactive multiobjective optimization, the presented work lays the foundation for the emerging field of explainable interactive multiobjective optimization. Ultimately, it can contribute to improved transparency, trust, and decision support, helping decision makers solve complex real-world problems.
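As a point of reference for the terminology used above, the problem class in question can be stated in standard notation. This is a generic sketch; the symbols below are conventional and not quoted from the thesis:

\[
  \min_{\mathbf{x} \in S} \; \bigl( f_1(\mathbf{x}),\, f_2(\mathbf{x}),\, \ldots,\, f_k(\mathbf{x}) \bigr),
\]
where $S \subseteq \mathbb{R}^n$ is the feasible set and $f_1, \ldots, f_k$ (with $k \geq 2$) are conflicting objective functions to be minimized. A solution $\mathbf{x}^* \in S$ is Pareto optimal if there is no $\mathbf{x} \in S$ such that $f_i(\mathbf{x}) \leq f_i(\mathbf{x}^*)$ for all $i = 1, \ldots, k$ with strict inequality for at least one index; that is, no objective can be improved without impairing another, which is why additional preference information from a decision maker is needed to single out a final solution.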