A major defence-sector organisation engaged the National Edge AI Hub to evaluate and strengthen the safety and security of federated learning (FL) deployed on low Size, Weight and Power (SWaP) devices used in airborne systems. As interest in distributed AI grows, so does the need to ensure that these systems can withstand adversarial threats and operate safely in environments where reliability is non‑negotiable.
Federated learning enables multiple edge devices to train AI models collaboratively without sharing raw data, reducing the privacy exposure that comes with centralised data collection. However, FL introduces its own attack surface and remains susceptible to vectors such as data poisoning, model manipulation and malicious update injection. These risks pose particular challenges in defence contexts, where adversaries may actively attempt to exploit system weaknesses.
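To see why malicious update injection matters, consider a minimal sketch of the standard federated averaging step (this is illustrative only and is not the organisation's actual implementation; the client values are invented for the example). The server averages client updates, so a single adversarial client can drag the global model far from the honest consensus:

```python
import numpy as np

def fed_avg(updates):
    """Plain federated averaging: the server takes the mean of client updates."""
    return np.mean(updates, axis=0)

# Three honest clients report similar model updates...
honest = [np.array([1.0, 1.1]), np.array([0.9, 1.0]), np.array([1.1, 0.9])]
# ...while one malicious client injects a large adversarial update.
poisoned = honest + [np.array([-50.0, -50.0])]

clean_model = fed_avg(honest)       # close to the honest consensus, ~[1.0, 1.0]
attacked_model = fed_avg(poisoned)  # dragged far off by one bad client, ~[-11.75, -11.75]
```

Because the mean weights every client equally, the attacker's influence scales with the magnitude of the injected update rather than with the number of compromised devices.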
The National Edge AI Hub worked closely with the organisation’s technical team to conduct a detailed threat assessment, modelling adversary capabilities and identifying vulnerabilities across hardware, software and deployment scenarios. The analysis focused specifically on distributed airborne systems operating under tight computational and power constraints.
The project included practical testing of the organisation’s existing federated learning implementation, with the Hub demonstrating attack techniques and assessing how the system responded under adversarial conditions. Based on these results, the Hub delivered actionable mitigation strategies, improved security requirements and recommendations for enhancing robustness. The engagement also involved developing updated code artefacts, documentation and demonstrations showing the impact of the improvements.
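One widely used class of mitigation against the poisoning behaviour described above is robust aggregation. As an illustration only (the source does not specify which mitigations the Hub recommended), a coordinate-wise median can replace the plain mean so that a single outlying update has bounded influence:

```python
import numpy as np

def median_aggregate(updates):
    """Coordinate-wise median: a simple robust alternative to plain averaging."""
    return np.median(updates, axis=0)

# Same scenario as before: three honest clients plus one poisoned update.
honest = [np.array([1.0, 1.1]), np.array([0.9, 1.0]), np.array([1.1, 0.9])]
poisoned = honest + [np.array([-50.0, -50.0])]

# The median discards the outlier's pull, staying near the honest consensus.
robust_model = median_aggregate(poisoned)  # ~[0.95, 0.95]
```

Robust aggregators of this kind trade a small loss of statistical efficiency on clean data for resilience when a minority of clients misbehave, which is often an acceptable trade-off on SWaP-constrained deployments.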
By the end of the collaboration, the organisation gained a clearer understanding of FL safety risks and a more resilient implementation suitable for mission‑critical use. The project highlights how the National Edge AI Hub helps industry partners adopt advanced edge‑AI technologies safely, offering specialist expertise in security, threat modelling and real‑world testing.