Multi-objective problems occur in all aspects of life; knowing how to solve them is crucial for accurately modelling the real world. Rapid progress is being made in adapting traditional machine learning paradigms to the multi-objective setting, but so far few works address the specific challenges of distributed multi-objective learning. Federated Learning (FL) is a distributed machine learning paradigm introduced to tackle problems where training data is inherently distributed and cannot be shared. With recent advances in hardware and model capabilities, FL is finding ever more widespread application to problems of increasing complexity, from deployment on edge devices to the tuning of large language models. However, heterogeneity caused by differences between participants remains a fundamental challenge in practice. Existing work has largely focused on mitigating two major types of heterogeneity: data heterogeneity and device heterogeneity. Yet as the use of FL evolves, other types of heterogeneity become relevant. In this work, we consider one such emerging challenge: the preference-heterogeneous setting, where each participant has multiple objectives, and heterogeneity is induced by differing preferences over these objectives. We propose FedPref, the first Personalised Federated Learning algorithm designed for this setting, and empirically demonstrate that our approach yields significantly improved average client performance and adaptability compared to other heterogeneity-mitigating algorithms across a range of preference distributions.