Project cooperation
Updated on 5 February 2026
Secure & Trustworthy AI Systems
About
We are seeking collaboration partners from industry, public institutions, research centers and universities to jointly develop and validate secure, robust and trustworthy AI systems for real-world, high-risk operational environments.
We are interested in organisations with expertise in:
- Robust, adversarial and resilient machine learning methods
- Detection of AI attacks such as data poisoning and backdoors
- Real-time anomaly detection, secure monitoring and AI security operations
- Data sourcing, data validation and automated data quality controls
- Secure federated and distributed learning
- Privacy-enhancing technologies (e.g. confidential computing, homomorphic encryption)
- Secure execution environments and trusted AI deployment infrastructures
- Legal, compliance and risk management expertise (AI regulation, data protection, GDPR)
- Public authorities, critical infrastructure operators or large enterprises able to provide pilot use-cases and validation environments
- AI transparency, model evaluation and security testing methodologies
- Development of secure AI training curricula and educational materials
📩 Organisations interested in collaboration are welcome to connect via the platform.