Prof. Dr. Michael Hahn - Heinz Maier-Leibnitz Prizewinner 2026

Computational Linguistics, Saarland University, Saarbrücken

Even the most advanced AI language models (large language models, LLMs) can perform poorly on logical tasks: calculations come out wrong, sequences are misrepresented, and the models hallucinate, generating incorrect figures or citations. Michael Hahn’s work at the interface of machine learning and computational linguistics explains why LLMs continue to make such errors despite significant advances. He has initiated a line of research that analyses the capabilities of the neural network architecture underlying all popular LLMs – the transformer architecture. As a result, he is able to demonstrate mathematically that transformers fail on tasks in which every part of the input is relevant to the output, i.e. where changing a single character can alter the correct result. This yields theoretical insights that make it possible to better predict the strengths and weaknesses of LLMs.
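To make the notion of "every part of the input is relevant" concrete, here is a minimal sketch of one standard example of such a fully input-sensitive function, the parity of a bit string. The announcement does not name a specific task; this example is an illustration, not Hahn's proof.

```python
# Parity: output 1 if the input contains an odd number of '1's, else 0.
# Every bit matters: flipping any single character of the input
# flips the correct output, the full input sensitivity described above.

def parity(bits: str) -> int:
    """Return 1 if the string contains an odd number of '1's, else 0."""
    return bits.count("1") % 2

x = "1101001"            # four '1's -> even -> parity 0
print(parity(x))         # prints 0

# Flip a single character (position 3): the correct answer flips too.
flipped = x[:3] + ("0" if x[3] == "1" else "1") + x[4:]
print(parity(flipped))   # prints 1
```

Intuitively, a model that spreads its attention over the whole input can lose track of individual characters, which is why functions of this kind are hard for transformers.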