Project cooperation
Updated on 9 March 2026
Postdoc: Reasoning in LLMs
About
I’m looking for a postdoc to work on reasoning in large language models, with an emphasis on making reasoning more transparent, reliable, and controllable.
Example project areas:
- Reasoning evaluation: robust benchmarks and metrics; separating true reasoning from plausible rationalizations.
- Steering & control: interventions to improve reasoning quality (prompting, decoding, training-time methods, activation/representation steering).
- Faithfulness: detecting unfaithful explanations; increasing explanation–answer alignment.
You can browse my publication list here (if something overlaps with your interests, feel free to reach out):
https://yftah89.github.io/publications
Please reach out with:
- CV
- Publication list
- Brief research interest statement (1–2 paragraphs)
Successful candidates typically have publications in top ML/NLP venues (e.g., NeurIPS, ICLR, ICML, ACL, EMNLP, NAACL, AAAI).
Topic
- MSCA-POSTDOCTORAL FELLOWSHIPS
Type
- POSTDOCTORAL FELLOWSHIP: Looking for Fellow
Organisation
Yftah Ziser
Assistant professor at University of Groningen
Groningen, Netherlands