Director of the Centre: Professor Rajiv Ranjan
Newcastle University has established the Centre for AI Safety (CAIS), a ground-breaking new research centre that will tackle one of the most critical challenges of our time – ensuring artificial intelligence systems operate safely, ethically, and reliably across all sectors of society.
Leading the Global Response to AI Safety
The Centre for AI Safety represents Newcastle University’s bold response to the rapid advancement of AI technology and the urgent need to address the economic and social risks these developments pose. As AI systems become increasingly autonomous and powerful, the need for dedicated research into their safe deployment has never been more critical.
Professor Rajiv Ranjan, Director of the National Edge AI Hub and the newly appointed leader of CAIS, said: “We are at a technological inflexion point. The establishment of the Centre for AI Safety at Newcastle represents a bold step toward addressing one of the most critical challenges of our time. This initiative will not only enhance the university’s research excellence and reputation, but also contribute to ensuring that the benefits of AI are realised safely, ethically, and equitably, both nationally and globally.”
A Unique Focus on Real-World Safety
CAIS takes a distinctly different approach from existing data science and cybersecurity initiatives. While data science focuses on extracting insights from information and cybersecurity aims to protect systems and networks, AI Safety is specifically about designing and building autonomous decision-making systems that can operate safely, ethically, and reliably in real-world environments – particularly in high-stakes domains such as healthcare, where safety risks are life-critical.
The Centre will address safety risks across all levels of AI systems, from the development of large foundation models to their deployment in real-world industrial and societal contexts, including:
- Digital twins for crisis resilience and mitigation
- Autonomous vehicle safety systems
- Fraud detection systems
- Smart agriculture applications
- Smart manufacturing processes
- Energy security systems
Cross-Faculty Excellence
CAIS brings together expertise from across Newcastle University’s three faculties: Science, Agriculture and Engineering (SAgE), Faculty of Medical Sciences (FMS), and Humanities and Social Sciences (HaSS). This interdisciplinary approach ensures that AI safety research addresses technical, ethical, medical, and social dimensions simultaneously.
The Centre will deliver impact through seven dynamic work packages:
- Public engagement, education, and workforce development – fostering societal understanding of AI safety and preparing the next generation of leaders
- Interdisciplinary grant applications and funding – securing collaborative research funding from UKRI, the European Commission, and industry partners
- Impact and policy influence evaluation – developing frameworks to measure AI safety outcomes and contribute to policymaking
- Scientific excellence in AI Safety for critical systems – developing robust, scalable, and safe AI systems for critical applications
- Ethical AI and socio-technical governance – addressing fairness, accountability, and transparency in AI systems
- AI in decision-making for sustainable development – leveraging AI for safe societal development
- Human-centric AI and interaction design – designing AI systems that prioritise human safety and well-being
Strategic Partnerships and Investment
The Centre has already secured significant external support and partnerships that will drive its research forward:
Lenovo Partnership: Lenovo has agreed in principle to establish a Joint National AI Safety Lab at Newcastle University, providing state-of-the-art AI hardware with a minimum investment of £200,000. This lab will enable high-impact, contracted research projects combining Lenovo’s cutting-edge technology with Newcastle’s world-leading expertise in AI safety.
Singapore Design University Collaboration: A Newcastle-SDU Joint Centre in Trusted and Safe AI has been agreed in principle, which will facilitate contracted research projects across Southeast Asia and strengthen Newcastle’s position as a global leader in safe and trustworthy AI.
National Edge AI Hub Synergies: The Centre will leverage the resources, expertise, and partnerships developed through the National Edge AI Hub, which provides approximately £200,000 per year in in-kind support. Newcastle researchers will have access to £2.5 million in flexible funding currently held by the Hub, along with a national collaboration network of 12 universities and 60 industry partners.
Addressing a Global Priority
AI safety has become a global priority, with countries around the world establishing dedicated institutes and investing heavily in this critical area. The UK Government has launched the AI Security Institute (AISI), and similar initiatives exist in the US, Japan, Europe, and Australia. Newcastle’s Centre for AI Safety will be uniquely positioned as the first university-based AI Safety Centre with a holistic focus on preventing harm by ensuring the safe operation of AI systems.
Looking to the Future
The Centre’s five-year execution plan reflects a commitment to delivering impact through strategic collaboration, research excellence, and practical implementation. Key development milestones include expanding interdisciplinary research collaborations, developing new educational programmes in AI safety, and establishing Newcastle as a national testbed for safe AI deployment in healthcare and public services.
As AI continues to transform every aspect of society, the Centre for AI Safety will ensure that Newcastle University remains at the forefront of efforts to harness the benefits of artificial intelligence while minimising risks and ensuring these powerful technologies serve humanity safely and ethically.
The Centre for AI Safety joins Newcastle University’s portfolio of 19 Centres of Research Excellence (NUCoREs), enabling the University to offer a coherent and distinctive narrative of collective excellence in this critical area of research, education, and engagement with global reach.
To express interest in contributing to the Centre for AI Safety, please complete the expression of interest form.