
Project cooperation (updated on 9 March 2026)

Postdoc in LLM Alignment

Yftah Ziser

Assistant professor at University of Groningen

Groningen, Netherlands

About

I’m looking for a postdoc to work on LLM alignment, with an emphasis on making models safer, more faithful, and fairer.

Example project areas:

Faithfulness & hallucinations: hallucination detection and mitigation, reliability evaluation, and data contamination analysis.

Safety & controllability: safety steering, guardrails, robust refusal/compliance behavior.

Fairness & robustness: reducing disparate behavior across groups, domains, and languages (including multilingual and low-resource settings).

You can browse my publication list here; if something overlaps with your interests, feel free to reach out:

https://yftah89.github.io/publications

Please reach out with:

CV

Publication list

Brief research interest statement (1–2 paragraphs)

Successful candidates typically have publications in top ML/NLP venues (e.g., NeurIPS, ICLR, ICML, ACL, EMNLP, NAACL, AAAI).

Topic: MSCA Postdoctoral Fellowships

Type: Postdoctoral Fellowship (looking for Fellow)

Organisation: University of Groningen

Location: Groningen, Netherlands
