Dr. Grégoire Danoy is a Research Scientist at the University of Luxembourg (UL) and Head of the Parallel Computing and Optimisation Group (PCOG).
His research focuses on optimization, swarm intelligence, and machine learning, with applications to unmanned autonomous systems in air and space, cloud computing, high-performance computing, and smart and sustainable mobility.
Authorisation to direct research (ADR), 2019
University of Luxembourg, Luxembourg
PhD in Computer Science, 2008
École des Mines de Saint-Étienne, France
Master in Computer Science, 2004
École des Mines de Saint-Étienne, France
Industrial Engineer degree in Computer Science, 2003
Luxembourg University of Applied Sciences (IST)
* Research in Artificial Intelligence, Optimisation, Smart Cities, Vehicular Networks
* Research funding acquisition: FNR (Luxembourg), Eureka-Celtic (EU)
* Project management: Work package and task leader
* Daily advising of doctoral and postdoctoral researchers
* Reviewer for international scientific conferences and journals
* Teaching optimisation at Bachelor and Master levels
* Initiator and leader of the Dafo project, a distributed multi-agent framework for the optimisation of business problems
* Teaching UML and algorithmics at Bachelor and Master levels
Research Projects
Technology Transfer Projects
Institutional Responsibilities:
Editorial Boards:
Evaluation Committees:
Conferences:
Technical Program Committee Member:
Journals Reviewer:
Past and current teaching activities at University of Luxembourg
Education Management:
Bachelor Level:
Industrial Engineering Level:
Master Level:
PhD Level
Supervisor
Advisor
Master & Bachelor Level
Bachelor Level
Master Level
Participation in PhD Boards
The Federated Learning (FL) paradigm is a distributed machine learning strategy, developed for settings where training data is owned by distributed devices and cannot be shared with others. Federated Learning circumvents this constraint by distributing model training, so that each participant, or client, trains a local model only on its own data. The parameters of these local models are shared intermittently among participants and aggregated to enhance model accuracy. This strategy has shown impressive success, and has been rapidly adopted by industry in efforts to overcome confidentiality and resource constraints in model training. However, the application of FL to real-world settings brings additional challenges, many associated with heterogeneity between participants. Research into mitigating these difficulties in Federated Learning has largely focused on only two particular types of heterogeneity: the unbalanced distribution of training data, and differences in client resources. Yet many more types of heterogeneity exist, and some are becoming increasingly relevant as the capability of FL expands to cover more and more complex real-world problems, from the tuning of large language models to enabling machine learning on edge devices. In this work, we discuss a novel type of heterogeneity that is likely to become increasingly relevant in future applications: preference heterogeneity, which emerges when clients learn under multiple objectives, with different importance assigned to each objective on different clients. We discuss the implications of this type of heterogeneity and propose FedPref, the first algorithm designed to facilitate personalised federated learning in this setting. We demonstrate the effectiveness of the algorithm across several different problems, preference distributions and model architectures.
In addition, we introduce a new analytical point of view, based on multi-objective metrics, for evaluating the performance of federated algorithms in this setting beyond the traditional client-focused metrics. We perform a second experimental analysis based on this view, and show that FedPref outperforms the compared algorithms.
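The core FL mechanism described above, local training followed by intermittent parameter aggregation, is most commonly instantiated as FedAvg, a weighted average of client parameters. The following is a minimal illustrative sketch of that aggregation rule, not of the FedPref algorithm itself; the function name and toy values are assumptions for illustration.

```python
import numpy as np

def fedavg(client_params, client_sizes):
    """Aggregate local model parameters by a dataset-size-weighted
    average (the FedAvg rule)."""
    total = sum(client_sizes)
    return sum(p * (n / total) for p, n in zip(client_params, client_sizes))

# Three clients with toy 1-D parameter vectors and local dataset sizes.
params = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
sizes = [10, 10, 20]

global_params = fedavg(params, sizes)  # -> array([3.5, 4.5])
```

In the heterogeneous settings discussed above, a single such global average can fit no client well, which is what motivates personalised aggregation schemes.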
Multi-objective reinforcement learning (MORL) extends traditional RL by seeking policies that make different compromises among conflicting objectives. The recent surge of interest in MORL has led to diverse studies and solving methods, often drawing from existing knowledge in multi-objective optimization based on decomposition (MOO/D). Yet, a clear categorization based on both RL and MOO/D is lacking in the existing literature. Consequently, MORL researchers face difficulties when trying to classify contributions within a broader context due to the absence of a standardized taxonomy. To tackle this issue, this paper introduces multi-objective reinforcement learning based on decomposition (MORL/D), a novel methodology bridging the literature of RL and MOO. A comprehensive taxonomy for MORL/D is presented, providing a structured foundation for categorizing existing and potential MORL works. The introduced taxonomy is then used to scrutinize MORL research, enhancing clarity and conciseness through well-defined categorization. Moreover, a flexible framework derived from the taxonomy is introduced. This framework accommodates diverse instantiations using tools from both RL and MOO/D. Its versatility is demonstrated by implementing it in different configurations and assessing it on contrasting benchmark problems. Results indicate that MORL/D instantiations achieve performance comparable to current state-of-the-art approaches on the studied problems. By presenting the taxonomy and framework, this paper offers a comprehensive perspective and a unified vocabulary for MORL. This not only facilitates the identification of algorithmic contributions but also lays the groundwork for novel research avenues in MORL.
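Decomposition-based methods such as MOO/D typically reduce a multi-objective problem to a set of single-objective subproblems via scalarization functions. As a hedged illustration of this general idea (not of the specific MORL/D framework above), the sketch below shows two standard scalarizations of a reward vector; the utopia point and weight values are assumptions for illustration.

```python
import numpy as np

def weighted_sum(values, weights):
    """Linear scalarization: collapses a reward vector to a scalar.
    Can only reach solutions on convex parts of the Pareto front."""
    return float(np.dot(weights, values))

def chebyshev(values, weights, utopia):
    """Chebyshev scalarization (to be minimised): can also reach
    solutions on non-convex parts of the Pareto front."""
    return float(np.max(weights * np.abs(utopia - values)))

reward = np.array([0.8, 0.3])   # two conflicting objectives
w = np.array([0.5, 0.5])        # one decomposition weight vector
utopia = np.array([1.0, 1.0])   # assumed ideal point

ws = weighted_sum(reward, w)    # -> 0.55
ch = chebyshev(reward, w, utopia)  # -> 0.35
```

In a decomposition framework, each weight vector defines one subproblem, and solving the collection of subproblems approximates the Pareto front.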
Multi-objective problems occur in all aspects of life; knowing how to solve them is crucial for accurate modelling of the real world. Rapid progress is being made in adapting traditional machine learning paradigms to the multi-objective use case, but so far few works address the specific challenges of distributed multi-objective learning. Federated Learning is a distributed machine learning paradigm introduced to tackle problems where training data is distributed across devices and cannot be shared. With recent advances in hardware and model capabilities, Federated Learning (FL) is finding ever more widespread application to problems of increasing complexity, from deployment on edge devices to the tuning of large language models. However, heterogeneity caused by differences between participants remains a fundamental challenge in practice. Existing work has largely focused on mitigating two major types of heterogeneity: data and device heterogeneity. Yet as the use of FL evolves, other types of heterogeneity become relevant. In this work, we consider one such emerging heterogeneity challenge: the preference-heterogeneous setting, where each participant has multiple objectives, and heterogeneity is induced by different preferences over these objectives. We propose FedPref, the first Personalised Federated Learning algorithm designed for this setting, and empirically demonstrate that our approach yields significantly improved average client performance and adaptability compared to other heterogeneity-mitigating algorithms across different preference distributions.
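To make the preference-heterogeneous setting concrete, the sketch below shows how two clients can face the same objective losses yet optimise different scalar targets because of their individual preference vectors. This is a minimal illustration of the setting only, not of the FedPref algorithm; the loss values and preference vectors are assumptions for illustration.

```python
import numpy as np

def client_objective(objective_losses, preference):
    """Each client scalarises its vector of objective losses with its
    own preference vector (assumed non-negative, summing to one)."""
    return float(np.dot(preference, objective_losses))

# Identical objective losses on two objectives (e.g. accuracy vs. energy).
losses = np.array([0.9, 0.1])

# Two clients with opposite preferences over the same objectives:
loss_a = client_objective(losses, np.array([0.8, 0.2]))  # -> 0.74
loss_b = client_objective(losses, np.array([0.2, 0.8]))  # -> 0.26
```

Because each client pulls its model toward a different trade-off, a single shared global model cannot satisfy all clients, which motivates personalised federated approaches for this setting.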