Federated learning is a distributed machine learning technique in which a central server aggregates the model updates of every client. Such systems are vulnerable to several classes of attacks on their robustness; in a model poisoning attack, a malicious client submits crafted model updates during training to corrupt the aggregated global model. A recent study* introduces a low-cost approach that lets the server detect these malicious models through coordinate-based statistical comparison. In this project, we will extend this method to detect model poisoning attacks both on the clients and on the server.
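The exact statistic used by the study is not given here; as an illustrative sketch only, one common form of coordinate-based comparison measures how far each client's update lies from the coordinate-wise median of the cohort, scaled by the median absolute deviation (MAD). The function name, thresholds, and flagging rule below are assumptions, not the study's method:

```python
import numpy as np

def flag_suspicious_clients(updates, z_thresh=3.0, frac_thresh=0.5):
    """Flag clients whose updates deviate coordinate-wise from the cohort.

    updates: (n_clients, n_params) array of flattened model updates.
    A coordinate is 'suspicious' for a client when its robust z-score
    (distance from the coordinate-wise median, scaled by the MAD)
    exceeds z_thresh; a client is flagged when more than frac_thresh
    of its coordinates are suspicious.
    """
    updates = np.asarray(updates, dtype=float)
    median = np.median(updates, axis=0)                     # per-coordinate center
    mad = np.median(np.abs(updates - median), axis=0) + 1e-12  # robust scale, avoid /0
    z = np.abs(updates - median) / mad                      # robust z-scores
    suspicious_frac = (z > z_thresh).mean(axis=1)           # fraction of outlier coords
    return suspicious_frac > frac_thresh
```

Because the center and scale are medians rather than means, the statistic stays stable as long as honest clients form a majority, which is the standard assumption in this setting.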