Understanding that AI must choose its teammates wisely, KU’s system ensures connected devices are trusted for large-scale federated learning across urban networks. ©AerialPerspective Images/Moment/Getty Images

Trust issues: How AI picks its teammates 


In a world where AI increasingly learns from our devices, a dynamic, trust-driven framework promises security, scale and reliability without risking privacy.

What if your phone could help train a global AI system, perhaps contributing to a smarter healthcare app or improving voice recognition, all without you lifting a finger? Now imagine this happening across millions of devices, all learning together while keeping your private data exactly where it belongs: with you.

This is the promise of federated learning, a technique that allows AI to get smarter by tapping into multiple devices without ever accessing personal data directly. Instead of uploading your information to a central server, your phone trains the AI locally and shares only a summary of what it knows. 
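In code, the idea is simple. The sketch below is a generic illustration of federated averaging, not the Khalifa University team’s implementation: each toy device fits a small linear model on its own data, and only the resulting weights, never the raw data, are averaged by the server.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """Train a toy linear model locally; only the updated weights leave the device."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)   # mean-squared-error gradient
        w -= lr * grad
    return w

def federated_round(global_w, clients):
    """One round: each client trains on its own data, the server averages the results."""
    updates = [local_update(global_w, X, y) for X, y in clients]
    return np.mean(updates, axis=0)         # FedAvg-style aggregation

# Example: three devices with private data that never leaves them
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    clients.append((X, y))

w = np.zeros(2)
for _ in range(20):
    w = federated_round(w, clients)
print(w)  # approaches [2, -1] without any raw data being shared
```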

But there’s a catch: how do we know which devices can be trusted? 

“If you allow unverified devices into the training process, one bad actor can poison the model for everyone,” explains project leader Hadi Otrok from the Department of Computer Science at Khalifa University. “We needed a system that could tell the difference between a reliable contributor and a malicious one, automatically.”

Trust isn’t just nice to have. It’s the backbone of making federated learning work in the real world. 

Jamal Bentahar

Mario Chahoud from the Artificial Intelligence & Cyber Systems Research Center at Lebanese American University worked with KU researchers Jamal Bentahar and Azzam Mourad to develop a trust-driven system designed for complex, unpredictable environments such as smart cities and mobile edge networks. “It’s like choosing teammates for a group project,” says Bentahar. “You only want the ones who’ll do solid, consistent work, not those who might sabotage the outcome.” 

Their system continuously scores each device on its past behavior, its consistency and how well its updates align with those of its peers. Selection algorithms then use these scores to assemble the best combination of reliable, diverse and well-placed contributors. To manage the computation at scale, the team uses containerization to create secure, lightweight virtual workspaces that can be launched and shut down quickly on any infrastructure.
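To give a rough feel for how such scoring and selection might work, here is a simplified sketch; the weights, metrics and threshold are illustrative assumptions, not the published trust formula. Each device’s past reliability, its consistency and the alignment of its latest update with the group average feed into a single score, and only the highest-scoring devices are invited into the next training round.

```python
import numpy as np

def trust_scores(history, updates):
    """Combine past reliability with how well each update aligns with the group.

    history : dict of client -> list of past per-round reliability scores (0..1)
    updates : dict of client -> latest model update (vector)
    The weighting and metrics here are illustrative, not the paper's exact formula.
    """
    mean_update = np.mean(list(updates.values()), axis=0)
    scores = {}
    for c, u in updates.items():
        past = np.mean(history.get(c, [0.5]))                 # behaviour so far
        consistency = 1.0 - np.std(history.get(c, [0.5]))     # low variance = consistent
        align = np.dot(u, mean_update) / (
            np.linalg.norm(u) * np.linalg.norm(mean_update) + 1e-9
        )                                                     # alignment with peers
        scores[c] = 0.4 * past + 0.3 * consistency + 0.3 * max(align, 0.0)
    return scores

def select_clients(scores, k=3, threshold=0.5):
    """Keep only trusted clients, then pick the k best for the next round."""
    trusted = {c: s for c, s in scores.items() if s >= threshold}
    return sorted(trusted, key=trusted.get, reverse=True)[:k]
```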

“Think of it as setting up clean, disposable workstations that devices can use only when needed,” says Mourad. “Our orchestration process ensures untrusted environments don’t compromise the learning. It’s practical, and it’s deployable today.” 
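As a hedged illustration of that orchestration step, the snippet below launches a disposable training workspace with Docker; the ‘fl-trainer’ image and environment variables are hypothetical placeholders, not the team’s actual setup.

```python
import subprocess

def run_ephemeral_trainer(client_id: str, round_id: int) -> None:
    """Launch a disposable training workspace for one trusted client.

    Assumes Docker as the container runtime and a hypothetical 'fl-trainer' image;
    '--rm' deletes the container as soon as local training finishes, so nothing
    from an untrusted environment lingers on the host.
    """
    subprocess.run(
        [
            "docker", "run", "--rm",
            "--name", f"fl-{client_id}-r{round_id}",
            "-e", f"CLIENT_ID={client_id}",
            "-e", f"ROUND={round_id}",
            "fl-trainer:latest",
        ],
        check=True,
    )
```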

But getting it right wasn’t easy. “One of our simulations collapsed because all the clients ‘moved away’. Our location-based dataset had every device suddenly disappear,” Otrok recalls. The biggest challenge was balancing robustness and efficiency. “We had to make sure our trust mechanism could catch deception without overwhelming the system,” he says. 

Through it all, the Khalifa University team kept sight of one goal: helping AI grow not only smarter, but also wiser about who it learns from. As Bentahar puts it, “Trust isn’t just a nice-to-have. It’s the backbone of making federated learning work in the real world.” 

Reference

Chahoud, M., Mourad, A., Otrok, H., Bentahar, J. & Guizani, M. Trust driven On-Demand scheme for client deployment in Federated Learning. Inf. Process. Manag. 62, 103991 (2025).
