Theme Leaders: Dr Varun Ojha & Dr Yang Long
Researchers: Prof Dhaval Thakker and Prof Xianghua Xie
RT4 aims to develop ways of protecting data and AI algorithms from cyber-disturbances in edge computing systems. We'll use techniques such as generative adversarial training to repair corrupted data and keep AI models robust even when problems occur.
Building on what we learn from RT2 and RT3, we'll develop AI models that can guard against both known and unknown cyber-disturbances. This means tackling challenges such as monitoring the impact of cyber-disturbances on data and AI models, recovering from unknown problems, and making sure our solutions work across all kinds of edge devices.
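To give a flavour of the adversarial-training idea mentioned above, the sketch below trains a toy logistic-regression model on both clean and FGSM-style perturbed inputs. It is a minimal illustration only, assuming NumPy and synthetic two-blob data; it is not the project's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy 2-class data: two Gaussian blobs (hypothetical stand-in for sensor data).
X = np.vstack([rng.normal(-1, 0.4, (50, 2)), rng.normal(1, 0.4, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

w, b, lr, eps = np.zeros(2), 0.0, 0.1, 0.2
for _ in range(200):
    # Craft FGSM-style adversarial examples: step in the sign of the input gradient.
    p = sigmoid(X @ w + b)
    grad_x = (p - y)[:, None] * w[None, :]   # d(loss)/d(x) for logistic loss
    X_adv = X + eps * np.sign(grad_x)
    # Train on the union of clean and adversarial examples.
    X_all = np.vstack([X, X_adv])
    y_all = np.concatenate([y, y])
    p_all = sigmoid(X_all @ w + b)
    w -= lr * (X_all.T @ (p_all - y_all)) / len(y_all)
    b -= lr * (p_all - y_all).mean()

acc = ((sigmoid(X @ w + b) > 0.5) == y).mean()
print(f"clean accuracy after adversarial training: {acc:.2f}")
```

The same loop structure applies to deeper models: generate perturbed inputs from the current model's gradients, then update on the mixed batch so the model stays accurate under perturbation.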
Our first goal is to create ways to continuously monitor edge computing systems so that we can respond quickly to problems. This is challenging because edge systems can be complex and have limited resources. We'll use what we learn from RT2 to decide what to monitor and to make sure our monitoring tools can handle different kinds of systems. Next, we will work on developing flexible, community-driven methods to protect data and models from cyber-disturbances. We will use techniques such as reinforcement learning, which lets AI models learn and adapt over time. This will involve gathering feedback from experts and updating our protection methods as new threats arise. Finally, we will use what we have learned from WS3 to adapt our protection methods to edge computing environments, making sure our AI models work well across different kinds of edge devices and networks.
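The continuous-monitoring goal can be sketched very simply: watch a data-quality signal and raise an alarm when it deviates strongly from its recent history. The example below uses a rolling z-score over a hypothetical metric stream; the actual signals to monitor are exactly what RT2 is expected to identify, and a resource-constrained edge device motivates keeping the detector this lightweight.

```python
import math
from collections import deque

class RollingMonitor:
    """Flag readings that deviate strongly from a recent window of values."""

    def __init__(self, window=50, threshold=3.0):
        self.buf = deque(maxlen=window)   # bounded memory suits edge devices
        self.threshold = threshold

    def update(self, value):
        # Require a minimally filled window before judging anomalies.
        if len(self.buf) >= 10:
            mean = sum(self.buf) / len(self.buf)
            var = sum((v - mean) ** 2 for v in self.buf) / len(self.buf)
            std = math.sqrt(var) or 1e-9
            alarm = abs(value - mean) / std > self.threshold
        else:
            alarm = False
        self.buf.append(value)
        return alarm

monitor = RollingMonitor()
stream = [10.0 + 0.1 * (i % 5) for i in range(60)] + [25.0]  # sudden spike at the end
alarms = [monitor.update(v) for v in stream]
print("alarm raised at spike:", alarms[-1])
```

A real deployment would feed in whichever quality metrics matter (input statistics, model confidence, loss on held-out probes) rather than a synthetic stream, but the bounded-memory pattern carries over.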
The Vision of RT4:
The vision of RT4 (AI Data and Model Quality) is to establish research directions for developing fundamental concepts and techniques that can guard the data and AI algorithm learning quality against cyber-disturbances impacting EC architectures.
This theme sets out the following challenges and research aims:
- Monitoring of Data/Model Quality
How to monitor the impact of cyber-disturbances on the quality of data, AI algorithm learning, and overall application resilience?
- Recovery of Data/Model Quality
How to ensure the recovery of data and AI model quality impacted by cyber-disturbances, and ensure suitability for AI model deployment on devices at Tiers 1 and 2 of EC architectures?
- Assurance of Continuity of Data Quality and Model Quality
How to assure that AI algorithms continually adapt to EC environments containing unknown cyber-disturbances that were not present in the original training dataset?
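One ingredient of the third challenge is simply detecting that inputs no longer look like the training data. As a hedged illustration, the sketch below scores samples by Mahalanobis distance from the training distribution, a common proxy for "unseen disturbance"; the data and threshold-free comparison are illustrative assumptions, and the research aim goes well beyond this baseline.

```python
import numpy as np

rng = np.random.default_rng(1)
train = rng.normal(0.0, 1.0, (500, 3))   # in-distribution training data (synthetic)

# Fit the training distribution's mean and inverse covariance once, offline.
mu = train.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(train, rowvar=False))

def shift_score(x):
    """Mahalanobis distance of a sample from the training distribution."""
    d = x - mu
    return float(np.sqrt(d @ cov_inv @ d))

in_dist = rng.normal(0.0, 1.0, 3)        # looks like training data
disturbed = rng.normal(6.0, 1.0, 3)      # unseen disturbance, far from training data

print("disturbed scores higher:", shift_score(disturbed) > shift_score(in_dist))
```

Flagged samples could then trigger the recovery and adaptation mechanisms described in the second challenge, closing the loop between monitoring and assurance.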

Figure 1: Overview of RT4 (AI Data and Model Quality)

Figure 2: Data poisoning adversarial attacks on deep learning algorithms (more on https://doi.org/10.1016/j.artint.2023.104060)

Figure 3: Quality of Graph Transformer: An experiment with Low-rank and global-representation-key-based attention for graph transformer (more on https://doi.org/10.1016/j.ins.2023.119108)

Figure 4: Alignment approach for deep learning model quality improvement (https://doi.org/10.1109/TPAMI.2024.3487631)


Animation 1: Deep learning for automated river-level monitoring through river-camera images. (more on https://doi.org/10.5194/hess-25-4435-2021)

Animation 2: Pre- and post-buckling analyses of space truss structures in civil engineering using artificial intelligence methods.