Cycle: XL

PhD student: Matteo Da Pelo

Role: R1 - First Stage Researcher

Supervisor: Prof. Pietro Salis – Università di Cagliari, Prof. Antonio Lieto – Università di Salerno


After many years of study, including a Bachelor's degree in Philosophy, a Master's degree in Logic, Philosophy and History of Science and one in Philosophy of Artificial Intelligence and Digital World, I am now a PhD student at UniCa. My project examines the Strong AI position, namely the claim that our cognition is entirely simulable, and aims to show that at least a weaker form of it is tenable within a dynamic, embodied and emergentist cognitive approach. One of my main goals is to show the epistemological value of AI models: studying these new digital cognitive forms can greatly improve our understanding of our own cognition. My research interests include Artificial Intelligence and its philosophy, Philosophy of Information, Philosophy of Technology, Philosophy of Mind, paraconsistent logic, bioethics and AI ethics.

The Turing Test (TT) was the first and most influential method proposed for evaluating artificial intelligence (AI). Formulated as a behaviorist test, it served for decades as a crucial reference point in the comparison between human and artificial intelligence. However, despite demonstrating advanced capabilities for their respective eras, systems such as the General Problem Solver (GPS), ELIZA, Semantic Networks (SN), SOAR, SHRDLU, and later DeepBlue and AlphaGo, never strictly passed the TT. The advent of the Transformer architecture in 2017 radically reshaped the AI landscape, and with the release of GPT-3 in 2020 it became evident that the TT could be passed, at least in certain contexts. This shift underscored the urgency of developing more objective methods for evaluating AI capabilities, leading to the widespread adoption of benchmarks such as GLUE and SuperGLUE, which assess performance on specific linguistic tasks. However, these benchmarks do not preserve the original intent of the TT, which was not merely a measure of performance but a comparative test between artificial and human intelligence.

Comparing the capabilities of Large Language Models (LLMs) to human cognition remains methodologically problematic, as these systems are not designed to simulate human cognition but rather to optimize large-scale natural language processing. As early as the 1960s, Marvin Minsky distinguished between two main approaches in AI: "Machine-Oriented" programs, focused on computational problem-solving, and "Human-Centered" programs, aimed at modeling human cognition. LLMs clearly belong to the former category, raising questions about their suitability as tools for cognitive research. In 2021, Antonio Lieto proposed the Minimal Cognitive Grid (MCG), a framework for assessing the degree of "cognitivity" of AI systems, offering a more rigorous criterion for distinguishing models grounded in cognitive science from purely statistical ones.

My research focuses on a critical analysis of AI evaluation methods, from the TT to modern benchmarks, with particular attention to the distinction between problem-solving AI and AI designed on cognitive principles. A central objective of the project is to assess the feasibility of developing evaluation tools better suited to differentiating between types of intelligent systems. In the long term, the project aims to show how cognitive AI can not only improve artificial systems but also contribute to our understanding of human cognition itself.

Matteo Da Pelo
