Workshop: AI & Understanding: Between Science, Logic, and Cognition

Date: 25.02.26

Place: TU Dortmund University, EF 50, 0.442

Poster

Register here

AI & Understanding: Between Science, Logic, and Cognition

This workshop examines the concept of understanding in artificial intelligence from an interdisciplinary perspective, situated at the intersection of computer science, formal logic, and cognitive science. It aims to critically analyze the theoretical foundations and conceptual assumptions underlying contemporary AI systems with respect to understanding and cognition. By bringing together scientific, logical, and cognitive perspectives, the workshop provides a forum for rigorous discussion on the possibilities and limitations of understanding in the context of AI, as well as its broader epistemological implications.

Speakers 

Gabriele Kern-Isberner (TU Dortmund): Conditionals for Cognitive Logics in AI 

Classical logics like propositional or predicate logic have long been considered the gold standard for rational human reasoning, and hence a solid, desirable norm on which, ideally, all human knowledge and decision making should be based. For instance, Boolean logic was set up as a kind of algebraic framework meant to make rational reasoning computable in an objective way, similar to the arithmetic of numbers. Computer scientists adopted this view to (literally) implement objective knowledge and rational deduction, in particular for AI applications. Psychologists have used classical logics as norms to assess the rationality of human commonsense reasoning. However, neither discipline could ignore the severe limitations of classical logics, e.g., computational complexity and undecidability, failures of logic-based AI systems in practice, and the many logical paradoxes and failures observed in psychological experiments.

Many of these problems are caused by the inability of classical logics to deal adequately with uncertainty. Both computer science/AI and psychology have used probabilities as a way out of this dilemma, hoping that numbers and the Kolmogorov axioms can do a better job (somehow). However, psychologists have observed plenty of paradoxes here as well (maybe even more). This raises the question of whether humans are hopelessly irrational, so that computer-based information processing built on classical approaches should be considered superior to human reasoning in terms of objectivity and rationality, or whether classical logic and probability theory are really the right norms for assessing rationality in the first place. Cognitive logics take the second position: they question the appropriateness of classical logic and/or probability theory by proposing alternative, logic-based approaches that aim at overcoming the limitations of those classical frameworks. In particular, they show how the paradoxes observed in benchmark examples can be resolved.
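For illustration (this example is not part of the abstract, but a standard benchmark from the nonmonotonic-reasoning literature), consider how a conditional-based, ranking-function semantics handles an exception that classical logic turns into a contradiction:

    % Illustration (not from the abstract): the classical "Tweety" benchmark.
    % With material implication, the knowledge base becomes inconsistent as soon
    % as a penguin exists:
    \[
      \forall x\,(\mathit{bird}(x)\rightarrow \mathit{flies}(x)),\qquad
      \forall x\,(\mathit{penguin}(x)\rightarrow \mathit{bird}(x)),\qquad
      \forall x\,(\mathit{penguin}(x)\rightarrow \neg\mathit{flies}(x)).
    \]
    % A conditional knowledge base instead uses the defeasible conditional
    % (flies | bird), "birds normally fly". In ranking-based semantics (ordinal
    % conditional functions \kappa), a conditional (B|A) is accepted iff the most
    % plausible A-worlds are A-and-B-worlds:
    \[
      \kappa(A \wedge B) \;<\; \kappa(A \wedge \neg B),
    \]
    % so learning penguin(tweety) merely overrides the default rather than
    % producing a contradiction.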

Donal Khosrowi (Hannover University): What Kind of Thing is AlphaFold and What Does it Do?

The use of large, deep learning-based ML systems like AlphaFold is gaining traction across the sciences, promising to significantly accelerate and augment scientific discovery. Philosophers of science have focused on a range of urgent questions regarding their emerging roles, such as how the opacity of ML systems presents obstacles to important scientific goals like explanation and understanding.

This paper broadens the scope of ongoing debates by examining how emerging roles of ML systems put pressure on fundamental concepts we use to understand and organize scientific practice. Using AlphaFold as an example, I focus on two basic questions that, surprisingly, we don’t have compelling answers for: First, what kind of thing is AlphaFold? Second, what kinds of things does AlphaFold produce?

The first question has a widespread and deceptively simple answer: AlphaFold (AF) is a model — a trained, parameterized function that approximates complex protein sequence-structure relationships. But this simple view is unsatisfactory: what does AF model, exactly? How, if at all, does it represent target phenomena? Can it serve functions traditionally associated with scientific models, e.g. explanation? And so on. While these issues currently capture significant attention in the philosophy of science community, these debates have also taken for granted, in different ways, the simple answer that ML systems like AF are models.

This is potentially at odds, however, with other views arguing that ML systems like AF can be understood as part of discovering collectives, exhibiting relevant forms of epistemic autonomy in attending to, exploiting, and encoding information from data to yield epistemic achievements that are not reducible to contributions made by human engineers and investigators alone. On such a view, AF can hence be understood to play the role of a co-discoverer. What is striking about these two views is that the concepts ‘model’ and ‘discoverer’, in the philosophy of science thus far, have never been overlapping concepts. This situation is not only surprising but also unsatisfactory. We should be able to tell clearly what kind of thing ML systems like AF are without encountering significant conceptual uncertainty.

The second question — what does AF produce? — presents similar difficulties. The simple view treats AF outputs as predictions or hypotheses that require empirical validation. However, recent arguments suggest that ML systems like AF can produce synthetic evidence, which provides genuinely new knowledge about the world that investigators may accept and build on for downstream tasks, such as drug discovery. Again, this view is at odds with the simple view that AF outputs are simply predictions or hypotheses.

Against the background of these tensions, I suggest the need for a new concept to better capture ML systems like AF: autonomous model systems (AMS). AMS combine features of epistemic agents and scientific models by virtue of exhibiting considerable epistemic autonomy in interacting with data and building novel representations that are useful for prediction or explanation. It is this autonomy, I argue, that allows AMS to provide genuinely new evidence to human investigators and, under suitable conditions, facilitate other goals like scientific understanding.

Carlos Zednik (Eindhoven University)

Nina Poth (Radboud University): Misplaced Concreteness in the Cognitive Interpretation of Large Language Models

Artificial neural networks (ANNs), and large language models (LLMs) in particular, have been appreciated in cognitive science not only for their ability to predict neural and behavioural data, but also because one can perform controlled interventions on them in the service of mechanistic explanation (Millière & Buckner 2024). These utilities admitted, it remains difficult to say in which sense such models can help us understand cognitive capacities such as learning, representing, and reasoning. I argue that this difficulty arises in large part from the underappreciated “fallacy of misplaced concreteness” (Chirimuuta 2024, p. 246), which is exemplified by recent claims that LLMs such as Othello-GPT learn world models (see, for example, Goldstein & Levinstein 2024; critical discussion in Mitchell 2025). While such abstract generalisations can under certain conditions be helpful for tracking cognitive phenomena, their current use in practice prematurely reifies them as discoveries about cognition in a literal sense (see also van Rooij et al. 2024). As a route to avoiding the fallacy, I propose viewing ANNs as artefacts that motivate rethinking or explicating what cognitive capacities are, rather than taking observations of ANNs’ activities as confirming their presence.

References

Chirimuuta, M. (2024). The brain abstracted: Simplification in the history and philosophy of neuroscience. MIT Press.

Millière, R., & Buckner, C. (2024). A philosophical introduction to language models, Part II: The way forward. arXiv preprint arXiv:2405.03207.

Mitchell, M. (2025). LLMs and World Models, Part 2: Evidence For (and Against) Emergent World Models in LLMs. Retrieved from: https://aiguide.substack.com/p/llms-and-world-models-part-2, 22/01/2026, 22:04.

van Rooij, I., Guest, O., Adolfi, F., de Haan, R., Kolokolova, A., & Rich, P. (2024). Reclaiming AI as a theoretical tool for cognitive science. Computational Brain & Behavior, 7(4), 616-636.

Katharina Morik (TU Dortmund): An Epistemological View of Machine Learning in Physics

AI has always been developed in the tension between computer science methods and those of another discipline. In commercial applications, the goal of introducing AI is an increase in performance and profit; hence, the quality of AI is measured by the return on investment. If AI instead serves another field's scientific research, its quality is measured by the increase of knowledge in that field. AI is then a method of research, and it needs to be well grounded in its own theory as well as in the theory of statistics and computing. Otherwise, the results would not be valid.

In the talk, I will try to illustrate the importance of machine learning theory for the epistemologically valid use of machine learning. What is a theory? How is it represented? How is it learned from data?

First, logic-based machine learning is introduced through an example of AI for cognitive science. The knowledge acquisition and revision system MOBAL is mentioned; it was designed to assist human modeling. Inductive Logic Programming is one field of machine learning for which there are proofs of its properties, and one such proof will be sketched for illustration.

Second, the overall scientific cycle is illustrated by an example of IceCube data analysis using the RapidMiner system, which covers Extract, Transform, Load (ETL), a machine learning toolbox, and validation methods. Each method used must be validated in order to reach a valid conclusion.

Third, the importance of probabilities in the scientific process is stressed. I will conclude with Feyerabend's "anything goes" in order to cope with the tremendous success of methods that still lack applicable proofs of their properties.
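As a minimal illustration of the validation point (not from the talk; it assumes scikit-learn rather than RapidMiner and uses synthetic placeholder data instead of IceCube data), a cross-validated performance estimate is one concrete way to validate a learned model before drawing conclusions from it:

    # Minimal sketch: validating a learned classifier with cross-validation,
    # as one concrete instance of "each method used must be validated".
    # Assumes scikit-learn; the data are a synthetic stand-in, not IceCube data.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    # Synthetic signal/background-style classification task (placeholder data).
    X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

    model = RandomForestClassifier(n_estimators=100, random_state=0)

    # 5-fold cross-validation: each performance estimate comes from data the
    # model was not trained on, which is what licenses the conclusion drawn
    # from it.
    scores = cross_val_score(model, X, y, cv=5)
    print(f"mean accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")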

Annika Schuster (TU Dortmund): Illusions of Objectivity: Lessons from Shapley values in XAI

SHapley Additive exPlanations (SHAP) is one of the most popular techniques from eXplainable Artificial Intelligence (XAI) research. It promises to be an answer to the black-box problem: while making highly accurate predictions, deep learning (DL) systems are highly opaque, in that the reasons for their predictions remain inaccessible. Adapted from Shapley values in cooperative game theory, SHAP seems to have a solid theoretical foundation and is said to offer a unique solution that fulfils desirable properties. This is where illusions of objectivity arise.
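For reference (the formula below is standard game theory, not part of the abstract), the Shapley value attributes to feature i the quantity given here, where N is the feature set and v(S) is the model's expected output when only the features in S are present; Shapley's theorem states that this is the unique attribution satisfying efficiency, symmetry, the dummy property, and additivity, which is precisely where the appearance of objectivity comes from:

    \[
      \phi_i(v) \;=\; \sum_{S \,\subseteq\, N \setminus \{i\}}
      \frac{|S|!\,\bigl(|N|-|S|-1\bigr)!}{|N|!}\,
      \bigl(v(S \cup \{i\}) - v(S)\bigr)
    \]
    % In SHAP, the "players" are input features and v(S) must itself be
    % estimated from the model and the data, e.g. by averaging the model output
    % over the features outside S.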

In this article, I show that there are good reasons to question both the uniqueness and the desirability of the properties of SHAP explanations in the context of DL and propose, based on a case study of its use in computational biology, criteria that successful explanations with XAI tools should fulfil.

Schedule

25.02.26
09:00 Arrival
10:00 Introduction
10:15–11:00 Gabriele Kern-Isberner, TU Dortmund
11:15–12:00 Donal Khosrowi, Hannover University
12:00–13:00 Lunch
13:00–13:45 Carlos Zednik, Eindhoven University
14:00–14:45 Nina Poth, Radboud University
14:45–15:15 Coffee break
15:15–16:00 Katharina Morik, TU Dortmund
16:15–17:00 Annika Schuster, TU Dortmund

For inquiries, please contact the organizing committee at udnn.ht@tu-dortmund.de

Main Organizers: Annika Schuster, Frauke Stoll, and Florian J. Boge