September 12, 2025

New PhD Vacancies


Members of the National Edge AI Hub are looking for self-funded PhD candidates to join exciting projects at Newcastle University exploring the future of AI. Successful applicants will also benefit from the EPSRC National Edge Artificial Intelligence Hub, gaining access to world-class resources, collaboration opportunities, and training. Please see the ten new PhD vacancies below:

1. Recovery from Cyber Attacks in Cyber-Physical Systems

Supervision team

Main Supervisor: Shishir Nagaraja (Shishir.Nagaraja@newcastle.ac.uk)

Co-supervisor: Mujeeb Ahmed (Mujeeb.Ahmed@newcastle.ac.uk)

Research project

This PhD focuses on recovery from cyber attacks in Cyber-Physical Systems (CPS), delving into the unique challenges and open problems associated with this critical aspect of cybersecurity. Unlike traditional IT systems, CPS integrate computational and physical components, making them inherently more complex. This complexity, coupled with the real-world impact of these systems, makes recovery from cyber attacks challenging. The interconnected nature of CPS means that an attack on one component can have cascading effects, complicating the recovery process.

This studentship offers a unique opportunity to contribute to a cutting-edge field of study and make significant advancements in the security of Cyber-Physical Systems.

This PhD project is open to self-funded students. Successful candidates will gain access to the EPSRC National Edge Artificial Intelligence Hub, which “will deliver world-class fundamental research, co-created with stakeholders from other disciplines and regions, to protect the quality of data and quality of learning associated with Artificial Intelligence (AI) algorithms when they are subjected to cyber attacks in the Edge Computing (EC) environments.” The Hub provides solutions and support for edge AI, “fostering engagement, education, collaboration, and innovation across industries.” Candidates will also benefit from Hub resources and events through platforms such as Edge AI Engage and Edge AI Educate.

Applicant skills/background

This project requires applicants with a strong background in computer science, electrical engineering, or a related field, and a keen interest in cybersecurity.

2. AI Model Migration

Supervision team

Main Supervisor: Shishir Nagaraja (Shishir.Nagaraja@newcastle.ac.uk)

Research project

AI-based image and sensor analytics are vital for modern applications. However, AI adaptations are typically local to the model, which is usually attached to a device—for example, an MRI machine. This means that medics do not benefit from local adaptations when moving between hospitals.

This PhD will develop a dedicated line of research on model migration from device to device, in order to maintain a constant adaptation context and extract maximum value from AI techniques. The work presents many challenges, including the management of sensor flows to ensure that models are fed data in the correct sequence and no sensory inputs are missed during migration. It must also ensure that classification workflows face no interruption during the migration process.

Delivering a credible solution requires coordination between the network layer and the application layer. An additional challenge arises when models migrate to different places and must later merge again. Beyond preserving adaptability, migration also offers significant benefits such as minimising energy consumption, avoiding redundant re-learning at different sites, and enhancing overall system performance.
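
As a toy illustration of the sensor-flow challenge, the sketch below (all class and method names are hypothetical, not the project's design) uses sequence numbers and a handover buffer so that readings arriving mid-migration are neither lost nor reordered:

```python
# Minimal sketch, assuming sequence-numbered sensor readings and a serialisable
# model. Readings that arrive while the model is in transit are buffered and
# drained in order on the destination device.
import collections

class MigratingModelHost:
    """Hosts a model and buffers sensor input while a migration is in flight."""

    def __init__(self, model):
        self.model = model
        self.migrating = False
        self.buffer = collections.deque()   # readings queued during handover
        self.next_seq = 0                   # next sequence number we expect

    def on_sensor_reading(self, seq, reading):
        if seq < self.next_seq:             # duplicate or stale packet
            return None
        if self.migrating:
            self.buffer.append((seq, reading))   # hold until handover completes
            return None
        self.next_seq = seq + 1
        return self.model.classify(reading)

    def begin_migration(self):
        self.migrating = True
        # A real system would now ship the serialised model (weights plus
        # local adaptations) over the network to the destination device.
        return {"weights": self.model.export(), "next_seq": self.next_seq}

    def complete_migration(self, state, model):
        # Destination side: restore the model, then drain the backlog in order
        # so the classification workflow sees an uninterrupted stream.
        self.model, self.next_seq = model, state["next_seq"]
        results = []
        for seq, reading in sorted(self.buffer):
            if seq >= self.next_seq:
                results.append(self.model.classify(reading))
                self.next_seq = seq + 1
        self.buffer.clear()
        self.migrating = False
        return results
```

Sequence numbers give the destination an unambiguous resume point, which is one simple way to meet the "no missed inputs, no interruption" requirement across a handover.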

This PhD project is open to self-funded students. Successful candidates will gain access to the EPSRC National Edge Artificial Intelligence Hub, which “will deliver world-class fundamental research, co-created with stakeholders from other disciplines and regions, to protect the quality of data and quality of learning associated with Artificial Intelligence (AI) algorithms when they are subjected to cyber attacks in the Edge Computing (EC) environments.” The Hub provides solutions and support for edge AI, “fostering engagement, education, collaboration, and innovation across industries.” Candidates will also benefit from Hub resources and events through platforms such as Edge AI Engage and Edge AI Educate.

Applicant skills/background

This project requires a strong background in computer science, electrical engineering, or a related field, with a particular interest in AI systems, distributed computing, and networked systems.

3. Global-Scale Digital Forensics for Cyber-Physical Systems

Supervision team

Main Supervisor: Shishir Nagaraja (Shishir.Nagaraja@newcastle.ac.uk)

Co-supervisor: Mujeeb Ahmed (Mujeeb.Ahmed@newcastle.ac.uk)

Research project

Forensics in cyber-physical systems (CPS) often takes place in a local context. However, detecting attacks, protecting systems, recovering operations, and identifying attackers require cooperation across multiple defenders spanning both the CPS network and the core infrastructure.

Internet Service Provider (ISP) networks have visibility into 60–70% of network traffic, but lack the intelligence or local CPS context required to attribute attacks or detect stepping-stone behaviours. This project aims to develop methods and tools to address the critical challenges of visibility, scale, privacy, false positives, and cooperation in CPS forensics.
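
As a flavour of what cross-defender cooperation must detect, the sketch below shows one classic building block: correlating the on/off activity of an inbound and an outbound flow to flag a possible stepping stone. The window size and the synthetic data are illustrative assumptions, and a real system would also have to confront the privacy and false-positive challenges noted above.

```python
# Minimal sketch, assuming per-flow packet timestamps are available: flows whose
# coarse on/off activity patterns correlate strongly are candidate stepping-stone
# pairs (in the spirit of classic timing-correlation detectors).
import numpy as np

def activity_profile(timestamps, window=0.5, horizon=60.0):
    """Bin packet timestamps (seconds) into fixed windows -> on/off vector."""
    bins = np.arange(0.0, horizon + window, window)
    counts, _ = np.histogram(timestamps, bins=bins)
    return (counts > 0).astype(float)

def stepping_stone_score(ts_in, ts_out, window=0.5, horizon=60.0):
    """Pearson correlation of two flows' activity vectors; near 1.0 is suspicious."""
    a = activity_profile(ts_in, window, horizon)
    b = activity_profile(ts_out, window, horizon)
    if a.std() == 0 or b.std() == 0:
        return 0.0
    return float(np.corrcoef(a, b)[0, 1])

# Synthetic example: an outbound flow that relays the inbound flow ~50 ms later.
rng = np.random.default_rng(0)
inbound = np.sort(rng.uniform(0, 60, 200))
outbound = inbound + rng.normal(0.05, 0.01, inbound.size)
print(stepping_stone_score(inbound, outbound))   # high score -> candidate pair
```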

This PhD project is open to self-funded students. Successful candidates will gain access to the EPSRC National Edge Artificial Intelligence Hub, which “will deliver world-class fundamental research, co-created with stakeholders from other disciplines and regions, to protect the quality of data and quality of learning associated with Artificial Intelligence (AI) algorithms when they are subjected to cyber attacks in the Edge Computing (EC) environments.” The Hub provides solutions and support for edge AI, “fostering engagement, education, collaboration, and innovation across industries.” Candidates will also benefit from Hub resources and events through platforms such as Edge AI Engage and Edge AI Educate.

Applicant skills/background

This project requires a strong background in computer science, cybersecurity, or a related field, with an interest in digital forensics, large-scale systems, and cyber-physical security.

4. AI/LLMs in Cybersecurity

Supervision team

Main Supervisor: Shishir Nagaraja (Shishir.Nagaraja@newcastle.ac.uk)

Research project

This topic focuses on Large Language Models (LLMs) in the field of cybersecurity. The research will examine the dual nature of LLMs, investigating both their positive contributions and their potential threats to security and privacy, while addressing open challenges in this area.

On the positive side, LLMs have demonstrated strong potential in enhancing code and data security, outperforming traditional approaches in tasks such as secure coding, test case generation, vulnerable code detection, malicious code detection, and automated code fixing. They have also been used to support data integrity, confidentiality, reliability, and traceability.
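
As a minimal illustration of the defensive use case, the sketch below wraps an arbitrary completion function to request a structured vulnerability verdict; the `complete` callable, the prompt wording, and the JSON contract are all assumptions standing in for whichever LLM API and protocol a study would actually adopt.

```python
# Illustrative sketch only: prompting an LLM for a structured vulnerable-code
# verdict and parsing the reply defensively. Nothing here is a published
# benchmark protocol; the prompt and schema are invented for illustration.
import json

PROMPT = """You are a security reviewer. Analyse the code below and respond with
JSON: {{"vulnerable": true|false, "cwe": "<CWE id or null>", "reason": "<short>"}}

Code:
{code}
"""

def detect_vulnerability(code: str, complete) -> dict:
    """Ask the model for a verdict; never trust the reply to be well-formed."""
    raw = complete(PROMPT.format(code=code))
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        return {"vulnerable": None, "cwe": None, "reason": "unparseable reply"}

# Usage with a stubbed model (a real deployment would call an LLM API here):
fake_llm = lambda prompt: '{"vulnerable": true, "cwe": "CWE-89", "reason": "string-built SQL"}'
print(detect_vulnerability("cur.execute('SELECT * FROM t WHERE id=' + uid)", fake_llm))
```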

Conversely, LLMs also enable offensive applications, spanning hardware-level, OS-level, software-level, network-level, and user-level attacks. This studentship offers a unique opportunity to contribute to a cutting-edge field of study and to advance both the defensive and adversarial understanding of LLMs in cybersecurity.

This PhD project is open to self-funded students. Successful candidates will gain access to the EPSRC National Edge Artificial Intelligence Hub, which “will deliver world-class fundamental research, co-created with stakeholders from other disciplines and regions, to protect the quality of data and quality of learning associated with Artificial Intelligence (AI) algorithms when they are subjected to cyber attacks in the Edge Computing (EC) environments.” The Hub provides solutions and support for edge AI, “fostering engagement, education, collaboration, and innovation across industries.” Candidates will also benefit from Hub resources and events through platforms such as Edge AI Engage and Edge AI Educate.

Applicant skills/background

This project requires a strong background in AI, machine learning, or computer science, with a particular interest in LLMs and cybersecurity.

5. Autonomous Adversarial Planning in Marine Environments

Supervision team

Main Supervisor: Shishir Nagaraja (Shishir.Nagaraja@newcastle.ac.uk)

Research project

Sending a team of robots on a mission is easy. Getting them to succeed when things inevitably go wrong is the hard part. The real world—especially the marine environment in the presence of hostile actors—throws unpredictable weather, equipment failures, and unexpected obstacles at you. A plan that cannot adapt is a plan that will fail.

This PhD project tackles the fundamental control problem for a heterogeneous fleet of autonomous vehicles (UAVs, USVs, UUVs): not just how to devise a clever initial plan, but how to build a system that can replan intelligently and robustly under adversarial pressure to ensure the mission goal is still achieved. The aim is to develop robust planning and enactment methodologies for heterogeneous multi-robot teams operating in hostile and unpredictable environments, ensuring mission success through adaptive real-time control and enabling efficient data collection and high-quality environmental surveys even when things go wrong.

The problem reduces to two core questions, framing the mission controller as a system under attack. First, the initial plan: how can resources be marshalled most effectively, knowing the environment will attempt to degrade them? This involves developing planning techniques that are not only optimal for perfect conditions but also robust, with fallbacks and contingencies built in from the start. Second, the adaptive response: how does the system detect, diagnose, and respond to attacks on the plan? Here “attacks” include vehicle failures, new obstacles, or sudden environmental changes. The objective is to create strategies for real-time adaptation that are provably sound and prevent catastrophic mission failure.
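
A minimal sketch of that plan/monitor/replan control flow appears below. The `World` and `Planner` stubs are invented for illustration; a real system would replace them with vehicle interfaces and a contingency-aware planner, and would need the soundness guarantees described above.

```python
# Minimal sketch: execute a plan, treat any failed step as an "attack" on the
# plan, and replan from the current (degraded) state until the goal is met or
# the replanning budget is exhausted.
def run_mission(goal, world, planner, max_replans=5):
    plan = planner.plan(goal, world.observe())          # robust initial plan
    for _ in range(max_replans + 1):
        for step in plan:
            ok, observation = world.execute(step)       # act, then sense
            if not ok:                                  # vehicle loss, obstacle, weather...
                world.log(f"step {step!r} failed: {observation}")
                break
        else:
            return True                                 # every step succeeded
        plan = planner.replan(goal, world.observe(), failed_step=step)
        if plan is None:
            return False                                # no feasible recovery
    return False

# Tiny stubs so the sketch runs end to end:
class World:
    def __init__(self): self.t = 0
    def observe(self): return {"t": self.t}
    def execute(self, step):
        self.t += 1
        return (step != "survey_B" or self.t > 3), {"t": self.t}  # one transient failure
    def log(self, msg): print(msg)

class Planner:
    def plan(self, goal, obs): return ["transit", "survey_A", "survey_B", "return"]
    def replan(self, goal, obs, failed_step):
        return [failed_step, "return"]                  # resume from the failure point

print(run_mission("survey", World(), Planner()))        # True, after one replan
```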

Using a mix of air, surface, and underwater drones is the most effective way to survey a large, hostile area like the ocean. But this heterogeneity, while providing capability and redundancy, also massively increases the complexity of the control problem. This is not just an optimisation puzzle—it is a challenge of maintaining command and control in the face of a relentlessly adversarial environment. The research will focus on building a resilient system that can first devise a strong initial mission plan for a heterogeneous team, and second—and more importantly—dynamically replan and re-coordinate that team when it receives new, conflicting, or alarming information. Success will not be measured by how well it executes a perfect plan, but by how gracefully it handles imperfection and outright failure.

This PhD project is open to self-funded students. Successful candidates will gain access to the EPSRC National Edge Artificial Intelligence Hub, which “will deliver world-class fundamental research, co-created with stakeholders from other disciplines and regions, to protect the quality of data and quality of learning associated with Artificial Intelligence (AI) algorithms when they are subjected to cyber attacks in the Edge Computing (EC) environments.” The Hub provides solutions and support for edge AI, “fostering engagement, education, collaboration, and innovation across industries.” Candidates will also benefit from Hub resources and events through platforms such as Edge AI Engage and Edge AI Educate.

Applicant skills/background

This project requires a strong background in robotics, control systems, computer science, or a related field, with an interest in autonomous systems, planning under uncertainty, and adversarial resilience.

6. Digital Twins for Autonomous Systems: A Question of Trust and Liability

Supervision team

Main Supervisor: Shishir Nagaraja (Shishir.Nagaraja@newcastle.ac.uk)

Research project

Autonomous systems operating in contested environments are under constant attack, both from passive environmental noise and from active adversaries seeking to spoof, degrade, or deceive them. Relying on a single, monolithic digital twin for such a system is a profound security and engineering error: it creates a single point of failure, is too slow for real-time response, and acts as a “black box” whose failures cannot be diagnosed or attributed. When catastrophic errors occur, a monolithic model offers no clear evidence trail to distinguish between a sensor attack, a planning failure, or a flaw in the physics model.

This PhD project aims to develop a defensible, modular digital twin framework for autonomous systems that can withstand adversarial pressure and provide trustworthy, real-time assessments of system state. By explicitly tracking uncertainty and provenance, the framework will inform critical decisions and establish accountability when failures occur.

The approach is to break the digital twin into a network of smaller, linked modules, each corresponding to a physical or logical subsystem (e.g., sensor array, propulsion, navigation planner). This modularity provides key advantages: containment, where compromises or failures are isolated; attribution, where uncertainties and data provenance allow forensic diagnosis of whether a failure was due to spoofing, faults, or inference errors; and high-fidelity uncertainty tracking, ensuring that every module reports not only its output but also its confidence level.

The research will pursue three main objectives:

Modular Splitting Methodology: Develop secure methods to decompose a high-fidelity digital twin into independently executable modules, with explicit trust boundaries defining what information and uncertainties flow between them.

Bayesian Dependency Modelling: Represent module interconnections as a Bayesian network, propagating both data and uncertainty. This defence-in-depth mechanism automatically downgrades trust in modules receiving corrupted or low-confidence inputs (a simplified sketch follows this list).

Benchmarking for Real-Time Defence: Evaluate whether the modular, probabilistic architecture can update beliefs quickly enough for real-time operation under attack. Rapid assimilation of new data (e.g., ‘sensor X is under attack’) and ensuring that the system’s state belief is recomputed in a timely manner are treated as security-critical requirements.
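
As a deliberately simplified stand-in for full Bayesian propagation, the sketch below discounts each module's trust by the trust of its upstream modules, so evidence of a spoofed sensor automatically downgrades every dependant. Module names, reliabilities, and the flat discounting rule are illustrative assumptions.

```python
# Minimal sketch: modules form a DAG; trust in a module is its own reliability,
# discounted by direct evidence of compromise and by the trust of its parents.
modules = {                      # module -> (own reliability, upstream modules)
    "sensor_array": (0.99, []),
    "navigation":   (0.95, ["sensor_array"]),
    "planner":      (0.97, ["navigation"]),
}

def effective_trust(name, evidence):
    reliability, parents = modules[name]
    trust = reliability * evidence.get(name, 1.0)   # direct evidence, if any
    for parent in parents:
        trust *= effective_trust(parent, evidence)  # corrupted inputs propagate
    return trust

print(effective_trust("planner", evidence={}))                    # ~0.91 nominal
print(effective_trust("planner", evidence={"sensor_array": 0.2})) # ~0.18: degraded
```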

Ultimately, this project directly addresses the command-and-control problem for autonomous systems in adversarial environments. By replacing a brittle monolithic twin with a resilient modular network, the research will yield a system that not only predicts system behaviour but also diagnoses its own compromises. The probabilistic outputs will provide an auditable record of “what the system knew and when it knew it,” forming a defensible basis for decision-making and for assigning legal and ethical liability when things go wrong.

This PhD project is open to self-funded students. Successful candidates will gain access to the EPSRC National Edge Artificial Intelligence Hub, which “will deliver world-class fundamental research, co-created with stakeholders from other disciplines and regions, to protect the quality of data and quality of learning associated with Artificial Intelligence (AI) algorithms when they are subjected to cyber attacks in the Edge Computing (EC) environments.” The Hub provides solutions and support for edge AI, “fostering engagement, education, collaboration, and innovation across industries.” Candidates will also benefit from Hub resources and events through platforms such as Edge AI Engage and Edge AI Educate.

Applicant skills/background

This project requires a strong background in computer science, control systems, or a related field, with interests in autonomous systems, digital twins, probabilistic modelling, and cybersecurity.

7. Adversarial Underwater Sensing and Scene Reconstruction

Supervision team

Main Supervisor: Shishir Nagaraja (Shishir.Nagaraja@newcastle.ac.uk)

Research project

Sonar is used for everything from finding shipwrecks to inspecting pipelines, but the data it produces are notoriously difficult to interpret. The core challenge is reconstructing a 3D world from noisy, biased, and ambiguous 2D echoes. This problem is further complicated by adversaries in the water who can deliberately interfere with reconstruction. The underwater environment itself adds layers of difficulty—noise, multipath reflections, and distortions—making scene reconstruction a deeply ill-posed inverse problem. Existing methods often fail because they do not properly account for the physics of how sound propagates and is captured.

This PhD project proposes a new approach: to bake the physics of sonar directly into the reconstruction process. Instead of relying on generic neural networks to learn the physics from scratch, the system will embed the rules of sound propagation into the architecture itself. The aim is to develop reconstruction techniques that are inherently aware of how sound waves are generated, reflected, and measured, treating the sonar apparatus as an “adversary” whose outputs are corrupted by noise, bias, and inaccuracies. The task is to invert this faulty rendering process to recover the truth.

The research will focus on three key objectives. First, Model the Adversary: formally define sonar acquisition as an inverse problem, with a precise forward model that simulates noise, multipath, and systemic sensor biases. Second, Build the Inversion Engine: develop physics-aware neural networks constrained by this model, integrating differentiable physics renderers that can both generate realistic sonar images and invert them to recover the most probable 3D scenes. Third, Stress-Test the System: benchmark the techniques against realistic synthetic data generated by the forward model as well as real-world sonar data, testing their robustness to noise and bias.
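
The inversion idea can be illustrated on a toy 1-D problem, as sketched below in PyTorch: a differentiable forward model renders echoes from a scene through a known beam pattern plus a systemic bias, and gradient descent through that model recovers a sparse scene. The kernel, bias, and sparsity prior are invented stand-ins for the real 2-D/3-D sonar physics.

```python
# Minimal sketch: "model the adversary" as a differentiable forward renderer,
# then invert it by gradient descent with a sparsity prior on the scene.
import torch

def render(scene, psf):
    """Toy forward model: reflectivity convolved with a beam pattern, plus bias."""
    echo = torch.nn.functional.conv1d(scene[None, None], psf[None, None],
                                      padding=psf.numel() // 2)[0, 0]
    return echo + 0.05                               # systemic sensor bias

psf = torch.tensor([0.2, 0.6, 0.2])                  # assumed-known beam spread
true_scene = torch.zeros(64)
true_scene[20], true_scene[45] = 1.0, 0.5            # two reflectors
observed = render(true_scene, psf) + 0.01 * torch.randn(64)   # noisy measurement

estimate = torch.zeros(64, requires_grad=True)
opt = torch.optim.Adam([estimate], lr=0.05)
for _ in range(500):
    opt.zero_grad()
    loss = ((render(estimate, psf) - observed) ** 2).mean() \
           + 1e-3 * estimate.abs().mean()            # prior: few reflectors
    loss.backward()
    opt.step()

print(estimate.detach())   # expected to peak near indices 20 and 45
```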

By embedding physics into the model, the project moves beyond blind pattern matching toward reasoned inference. This approach promises algorithms that are more robust, more data-efficient, and more generalisable to messy real-world conditions. The outcome will be a step-change in our ability to reliably interpret sonar and reconstruct underwater scenes, enabling greater resilience in environments where adversarial interference is a constant threat.

This PhD project is open to self-funded students. Successful candidates will gain access to the EPSRC National Edge Artificial Intelligence Hub, which “will deliver world-class fundamental research, co-created with stakeholders from other disciplines and regions, to protect the quality of data and quality of learning associated with Artificial Intelligence (AI) algorithms when they are subjected to cyber attacks in the Edge Computing (EC) environments.” The Hub provides solutions and support for edge AI, “fostering engagement, education, collaboration, and innovation across industries.” Candidates will also benefit from Hub resources and events through platforms such as Edge AI Engage and Edge AI Educate.

Applicant skills/background

This project requires a strong background in computer science, signal processing, machine learning, or a related field, with an interest in physics-aware AI, inverse problems, and adversarial sensing.

8. Multi-Agent Collaboration and Reasoning

Supervision team

Main Supervisor: Shishir Nagaraja (Shishir.Nagaraja@newcastle.ac.uk)

Research project

This PhD project aims to develop robust frameworks for multi-agent collaboration that are resilient to subversion, malicious control, and resource-based attacks. The objective is to ensure that a team of heterogeneous autonomous agents can still achieve mission goals even when some agents are compromised or actively hostile.

The core problem is that collaboration itself must be treated as a security protocol. In contested environments, communication channels may be eavesdropped on, jammed, or manipulated, and some agents may be maliciously controlled by adversaries. Naively assuming that agents can trust one another is a recipe for failure. Beyond scalability and uncertainty, the open challenges are Byzantine faults, Sybil attacks, and malicious power attacks designed to drain key resources such as battery life and bandwidth, leading to mission-level denial of service.

The proposed approach is to design collaboration systems that are defensive by default. Rather than assuming benevolent actors, frameworks will be built to be paranoid—combining knowledge-based rules for verifiable protocols and trust boundaries with adaptive, data-driven learning for anomaly detection and real-time betrayal recognition. A central principle will be explainable agency, enabling human operators to audit decisions, alongside ecological rationality to ensure robustness under adversarial pressure.

The research will pursue three objectives: (1) Map the Attack Surface, exploring vulnerabilities in current multi-agent collaboration approaches and framing them as potential adversarial entry points; (2) Develop Resilient Frameworks, combining knowledge-based security protocols with machine learning techniques for anomaly detection and adaptive response, enabling compromised agents to be isolated while the team reorganises to continue the mission; and (3) Stress-Test Under Attack, evaluating these frameworks in simulated environments with malicious actors, including tests for battery-draining power attacks, Sybil attacks, and sudden betrayals of team members.
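
One small ingredient of such a framework is sketched below under invented parameters: fusing teammates' position reports with a trimmed mean, so that a minority of Byzantine agents cannot drag the consensus, and flagging persistent outliers as candidates for isolation.

```python
# Minimal sketch: robust fusion of agent reports plus a crude betrayal detector.
import numpy as np

def robust_fuse(reports, trim=1):
    """Coordinate-wise trimmed mean over agent reports (n x d array)."""
    ordered = np.sort(reports, axis=0)
    return ordered[trim:len(reports) - trim].mean(axis=0)

def suspects(reports, fused, threshold=3.0):
    """Agents whose report deviates far from the fused estimate."""
    errors = np.linalg.norm(reports - fused, axis=1)
    cutoff = threshold * np.median(errors)
    return [i for i, e in enumerate(errors) if e > cutoff]

reports = np.array([[10.1, 4.9], [9.9, 5.2], [10.0, 5.0],
                    [42.0, -7.0],                     # agent 3 is lying
                    [10.2, 5.1]])
fused = robust_fuse(reports)                          # ~(10.1, 5.0) despite the lie
print(fused, suspects(reports, fused))                # suspects -> [3]
```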

This research is critical for defence and security scenarios where multi-agent systems are deployed, including coordinated patrolling, disaster response, and distributed sensing. In such missions, assuming a benign environment is negligent. The outcome will be a defensible framework for multi-agent collaboration—capable of resilience, adaptation, and mission success even in the presence of adversarial manipulation.

This PhD project is open to self-funded students. Successful candidates will gain access to the EPSRC National Edge Artificial Intelligence Hub, which “will deliver world-class fundamental research, co-created with stakeholders from other disciplines and regions, to protect the quality of data and quality of learning associated with Artificial Intelligence (AI) algorithms when they are subjected to cyber attacks in the Edge Computing (EC) environments.” The Hub provides solutions and support for edge AI, “fostering engagement, education, collaboration, and innovation across industries.” Candidates will also benefit from Hub resources and events through platforms such as Edge AI Engage and Edge AI Educate.

Applicant skills/background

This project requires a strong background in computer science, artificial intelligence, or a related field, with interests in multi-agent systems, adversarial resilience, and explainable AI.

9. Auditing Structural Health Monitoring for Adversary Tolerance in Autonomous Sensing Platforms

Supervision team

Main Supervisor: Shishir Nagaraja (Shishir.Nagaraja@newcastle.ac.uk)

Research project

This PhD project aims to develop resilient, embedded sensing systems for composite structures that can reliably localise impact damage and diagnose structural health in the face of error, mischance, and active subversion, ensuring the continued airworthiness of critical components.

The core challenge is trusting a structure you cannot see. Composite materials, while highly capable, often fail internally and invisibly. Their hidden weaknesses can be exploited by maliciously induced acoustic waves or triggered by natural mechanical stress. Current sensor systems are typically bolted on as an afterthought—a tangle of wires that create new vulnerabilities, compromise the structure, and introduce a maintenance burden prone to human error. The goal is not simply to design a sensor network, but to build a continuous audit mechanism for structural integrity: robust against noise, resilient to the very damage it monitors, and secure against both external adversaries and internal degradation.

The research will pursue four key objectives. First, Develop Trustworthy Sensors that integrate symbiotically with composites rather than weakening them, avoiding delamination or embedded flaws. Second, Build a Robust Signal Processing Pipeline that can distinguish real impacts from adversarial noise or spurious vibrations, and provide confidence estimates rather than unreliable alerts. Third, Integrate and Validate Under Adversarial Conditions, developing algorithms for impact localisation and testing them under repeated strikes, environmental degradation, and spoofing attempts. Fourth, Stress-Test the Entire System on aged, cycled, and partially compromised materials, ensuring the monitor remains trustworthy and functional even as the host structure deteriorates.
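
As a highly idealised illustration of the localisation objective (isotropic wave speed and perfectly known sensor positions, both strong assumptions for real composites), the sketch below recovers an impact point from arrival-time differences and reports a residual-based confidence rather than a bare answer:

```python
# Minimal sketch: time-difference-of-arrival impact localisation by nonlinear
# least squares, with the fit residual doubling as a crude trust signal.
import numpy as np
from scipy.optimize import least_squares

SENSORS = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # metres
WAVE_SPEED = 1500.0   # m/s, assumed known and isotropic (a big simplification)

def tdoa_residuals(xy, arrivals):
    """Predicted minus measured arrival-time offsets, relative to sensor 0."""
    dists = np.linalg.norm(SENSORS - xy, axis=1)
    return (dists - dists[0]) / WAVE_SPEED - (arrivals - arrivals[0])

def localise(arrivals):
    fit = least_squares(tdoa_residuals, x0=[0.5, 0.5], args=(arrivals,))
    rms = np.sqrt(np.mean(fit.fun ** 2))   # large residual -> distrust the fix
    return fit.x, rms

# Synthetic impact at (0.3, 0.7) with slightly noisy arrival times:
impact = np.array([0.3, 0.7])
true_t = np.linalg.norm(SENSORS - impact, axis=1) / WAVE_SPEED
noisy_t = true_t + np.random.default_rng(1).normal(0, 1e-6, 4)
print(localise(noisy_t))   # estimate near (0.3, 0.7), small rms
```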

This research addresses a critical failure point in modern aviation and other high-stakes industries. A structural health monitoring (SHM) system must act as a primary defence against electro-mechanical stresses without itself becoming a security liability. By moving from external, wired sensors to embedded, continuous audit systems, the project will enable constant vigilance instead of periodic, error-prone inspections. The outcome will be a monitoring system whose failure modes are well-understood and contained, providing a trustworthy and secure foundation for decisions about airworthiness. Here, adversarial resilience is not an added feature—it is the core requirement, as the cost of failure is catastrophic.

This PhD project is open to self-funded students. Successful candidates will gain access to the EPSRC National Edge Artificial Intelligence Hub, which “will deliver world-class fundamental research, co-created with stakeholders from other disciplines and regions, to protect the quality of data and quality of learning associated with Artificial Intelligence (AI) algorithms when they are subjected to cyber attacks in the Edge Computing (EC) environments.” The Hub provides solutions and support for edge AI, “fostering engagement, education, collaboration, and innovation across industries.” Candidates will also benefit from Hub resources and events through platforms such as Edge AI Engage and Edge AI Educate.

Applicant skills/background

This project requires a strong background in electrical engineering, materials science, computer science, or a related field, with interests in embedded sensing, structural health monitoring, and adversarial resilience.

10. New Security Controls for Decentralised Agentic AI

Supervision team

Main Supervisor: Shishir Nagaraja (Shishir.Nagaraja@newcastle.ac.uk)

Research project

We are rapidly approaching a world saturated with powerful, decentralised AI agents. These agents and their underlying models have astonishing capabilities, but our ability to actually control them—to dictate how, where, and by whom they are used—is almost non-existent. For Edge AI in particular, trust must become a mechanical property, engineered into the system from the ground up to withstand both internal and external adversaries. The goal of this project is to build the cryptographic and architectural primitives that make this possible.

The project aims to develop AI models that do more than process data: they must be able to negotiate with each other under adversarial conditions, using a new language of interaction grounded in post-quantum cryptographic proofs. A first strand of work will focus on zero-knowledge AI handshakes, where models exchange data and capabilities only after mutually verifying each other’s credentials and intentions through zero-knowledge proofs (ZKPs). This allows them to prove entitlement to knowledge without revealing the secret itself, preventing unauthorised access and limiting information leakage. A second strand will develop threshold classifiers, where no single entity holds unilateral control. Instead, classification capability requires consensus—for example, three out of five nodes must cryptographically sign and unlock an action. This reduces the risk that a single compromised agent could trigger malicious outcomes.
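
The k-of-n gating logic is easy to sketch, as below, using per-node HMACs as a stand-in for a real threshold-signature scheme (which, as noted above, would need to be post-quantum; this toy implements none of that cryptography, only the consensus gate):

```python
# Minimal sketch: an action is authorised only if at least THRESHOLD distinct
# nodes produce valid tags over it. HMAC stands in for threshold signatures.
import hashlib
import hmac

NODE_KEYS = {f"node{i}": f"secret-{i}".encode() for i in range(5)}
THRESHOLD = 3

def sign(node: str, action: bytes) -> bytes:
    return hmac.new(NODE_KEYS[node], action, hashlib.sha256).digest()

def authorise(action: bytes, approvals: dict) -> bool:
    valid = {n for n, tag in approvals.items()
             if n in NODE_KEYS and hmac.compare_digest(tag, sign(n, action))}
    return len(valid) >= THRESHOLD

action = b"classify:frame-1742"
approvals = {n: sign(n, action) for n in ["node0", "node2", "node4"]}
print(authorise(action, approvals))        # True: 3-of-5 reached
approvals["node2"] = b"forged" + bytes(26) # a compromised agent's bogus tag
print(authorise(action, approvals))        # False: only 2 valid approvals
```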

Further research will explore access-controlled Edge AI, in which model functionality is not fixed but unlocked dynamically via authorisation keys. This allows a single deployed model to vary its capabilities instantly based on the credentials of the user, providing a direct mechanism for privilege management, rapid revocation, and containment of compromised entities. Alongside this, the project will embed end-to-end auditable workflows, incorporating verifiable confidentiality and tamper-evident audit trails directly into the AI’s operation. Every decision and interaction will generate a forensic record, providing both accountability and a mechanism for assigning liability when failures occur. Finally, the project will investigate deniable AI models, which deliberately obscure the provenance of training data. These “fogged” models protect privacy and act as a defensive measure against model inversion and membership inference attacks, ensuring adversaries cannot prove what data the model was trained on.
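
Of these strands, the tamper-evident audit trail is the easiest to sketch: below, each decision record commits to its predecessor by hash, so any retroactive edit breaks verification. Field names are invented, and a deployment would additionally sign each entry.

```python
# Minimal sketch of a hash-chained, tamper-evident audit log.
import hashlib
import json
import time

def append(log: list, event: dict) -> None:
    prev = log[-1]["digest"] if log else "0" * 64
    entry = {"event": event, "ts": time.time(), "prev": prev}
    entry["digest"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    log.append(entry)

def verify(log: list) -> bool:
    prev = "0" * 64
    for entry in log:
        body = {k: v for k, v in entry.items() if k != "digest"}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev"] != prev or entry["digest"] != digest:
            return False
        prev = entry["digest"]
    return True

log = []
append(log, {"decision": "grant", "capability": "segmentation"})
append(log, {"decision": "deny", "capability": "tracking"})
print(verify(log))                       # True
log[0]["event"]["decision"] = "deny"     # retroactive tampering
print(verify(log))                       # False: the chain no longer verifies
```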

This research moves beyond building smarter AI to building civilised AI: systems whose power is tempered by verifiable constraints, whose actions can be audited, and whose failures can be contained and attributed. The outcome will be a framework ensuring that the emerging Agentic AI revolution is secure, accountable, and resilient by design, not by accident.

This PhD project is open to self-funded students. Successful candidates will gain access to the EPSRC National Edge Artificial Intelligence Hub, which “will deliver world-class fundamental research, co-created with stakeholders from other disciplines and regions, to protect the quality of data and quality of learning associated with Artificial Intelligence (AI) algorithms when they are subjected to cyber attacks in the Edge Computing (EC) environments.” The Hub provides solutions and support for edge AI, “fostering engagement, education, collaboration, and innovation across industries.” Candidates will also benefit from Hub resources and events through platforms such as Edge AI Engage and Edge AI Educate.

Applicant skills/background

This project requires a strong background in computer science, AI, or cybersecurity, with particular interest in cryptography, distributed systems, and secure AI design.
