Project cooperation
Updated on 5 February 2026
Secure & Trustworthy AI Systems
EU Funding Specialist at ASELSAN
ANKARA, Türkiye
About
We are seeking collaboration partners from industry, public institutions, research centers and universities to jointly develop and validate secure, robust and trustworthy AI systems for real-world, high-risk operational environments.
We are interested in organisations with expertise in:
- Robust, adversarial and resilient machine learning methods
- Detection of AI attacks such as data poisoning and backdoors
- Real-time anomaly detection, secure monitoring and AI security operations
- Data sourcing, data validation and automated data quality controls
- Secure federated and distributed learning
- Privacy-enhancing technologies (e.g. confidential computing, homomorphic encryption)
- Secure execution environments and trusted AI deployment infrastructures
- Legal, compliance and risk management expertise (AI regulation, data protection, GDPR)
- Public authorities, critical infrastructure operators or large enterprises able to provide pilot use-cases and validation environments
- AI transparency, model evaluation and security testing methodologies
- Development of secure AI training curricula and educational materials
📩 Organisations interested in collaboration are welcome to connect via the platform.
Similar opportunities
Project cooperation
Looking for End-Users / Use-Case Owners (AI Project)
Fatma DURU
EU Funding Specialist at ASELSAN
ANKARA, Türkiye
Project cooperation
Hendar Mawan, PhD Eng.
R&D Director - Trustworthy edge AI & Data Governance at RIoT Secure AB
Stockholm, Sweden
Expertise
Acube – Governing AI from the Mathematical Core
- Artificial Intelligence and Data
- Business models and exploitation strategies
Fabrizio Coccetti
Project Manager at Acube srl
Teramo, Italy