Request
Updated on 24 February 2026
New mechanisms for controllability and attention understanding in AI systems
About
Multimodal Large Language Models rely on attention mechanisms as a core component of Transformer architectures, yet attention remains largely implicit, which limits controllability, interpretability, and higher-level reasoning. Building on the EIC Pathfinder project ASTOUND and inspired by Theory of Mind and Attention Schema Theory, this project aims to make attention an explicit, modellable, and steerable process. The research will investigate a hierarchical two-level framework that leverages principles from flow- and diffusion-inspired generative modelling to predict and regulate attention dynamics in state-of-the-art pre-trained Transformers. By enabling deliberate control over attention allocation, the proposed approach seeks to enhance reasoning and planning capabilities while improving explainability, contributing to more transparent, trustworthy, and cognitively grounded AI systems.
Similar opportunities
Project cooperation
- MSCA-POSTDOCTORAL FELLOWSHIPS
- POSTDOCTORAL FELLOWSHIP: Looking for Fellow
Kairi Herik
Senior Specialist for Personnel Recruitment at University of Tartu
Tartu, Estonia
Request
Beyond RAG and Vector-based memories for Active Learning
Luis Fernando D'Haro
Associate Professor at Universidad Politécnica de Madrid
Madrid, Spain
Expertise
Probabilistic Machine Learning and Generalization Theory in Modern Deep Learning
- PHY - Physics
- MAT - Mathematics
- ENG - Information Science and Engineering
- STAFF EXCHANGES: Beneficiary / Associated Partner
- DOCTORAL NETWORKS: Hosting Doctoral Candidates / Secondments / Trainings
- POSTDOCTORAL FELLOWSHIPS: Hosting Postdoctoral Candidates / Secondments / Placements
ANDRES MASEGOSA ARREDONDO
Associate Professor at Aalborg University
Copenhagen, Denmark