Abstract: Federated learning is an important method for addressing two critical challenges in machine learning: data sharing and privacy protection. However, federated learning itself faces the challenges of data heterogeneity and model heterogeneity. Existing research often focuses on only one of these issues, overlooking the correlation between them. To address both jointly, this paper introduces a framework named PFKD, which uses knowledge distillation to handle model heterogeneity and personalized algorithms to handle data heterogeneity, thereby achieving more personalized federated learning. Experimental analysis validates the effectiveness of the proposed framework: it overcomes model performance bottlenecks, improving model accuracy by approximately 1 percentage point, and with appropriate hyperparameter tuning the improvement rises to approximately 2 percentage points.
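The abstract names knowledge distillation only at a high level and does not specify PFKD's actual objective. As a point of reference for readers unfamiliar with the technique, the sketch below shows a standard distillation loss of the kind such frameworks typically build on: a client's (student) model is trained on its local labels while also matching the softened predictions of a teacher (e.g., a global or peer model). The function name `distillation_loss` and the hyperparameters `T` and `alpha` are illustrative assumptions, not taken from the paper.

```python
# Illustrative sketch only; not the paper's exact PFKD objective.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Combine supervised cross-entropy with a soft-target distillation term.

    T (temperature) and alpha (mixing weight) are hypothetical hyperparameters,
    standing in for the tunable knobs the abstract says affect performance.
    """
    # Hard-label supervised loss on the client's local data.
    ce = F.cross_entropy(student_logits, labels)
    # Soft-label loss: match the student's softened distribution to the teacher's.
    kd = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)  # standard temperature-squared scaling of the KD gradient
    return alpha * ce + (1.0 - alpha) * kd
```

Because the loss depends only on logits rather than on model weights, the student and teacher architectures may differ, which is what makes distillation a natural fit for the model-heterogeneity setting the abstract describes.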