Cognitive Foundations for Reasoning and Their Manifestation in LLMs
Best AI papers explained - Podcast by Enoch H. Kang
This research introduces a novel framework for analyzing the complexity of reasoning in Large Language Models (LLMs), defining a taxonomy of 28 cognitive elements organized into four dimensions: **Reasoning Invariants**, **Meta-Cognitive Controls**, **Reasoning Representations**, and **Reasoning Operations**. The authors used this framework to analyze over 190,000 reasoning traces from 18 LLMs, revealing that models often exhibit an inverted strategy: they deploy their most diverse behaviors on well-structured problems, where that diversity is least necessary. On challenging **ill-structured problems**, such as dilemmas and diagnosis tasks, models rigidly fall back on a narrow set of approaches, such as **sequential organization** and forward chaining, resulting in lower success rates. However, applying **test-time reasoning guidance** tailored to the cognitive structures of successful traces dramatically improved performance on these complex tasks, confirming that LLMs possess latent reasoning capacity. The findings highlight a critical gap: current LLM research neglects abstract cognitive functions such as **self-awareness** and complex structural organization, suggesting that the taxonomy is essential for shifting AI development toward more **theory-driven experimentation**.
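As a rough illustration of what taxonomy-guided test-time prompting could look like, here is a minimal Python sketch. Only the four dimension names come from the summary above; the element lists, the guidance wording, and the `guided_messages` helper are hypothetical placeholders, not the paper's actual 28-element taxonomy or its published prompts.

```python
# Sketch of taxonomy-guided test-time prompting (illustrative only).
# The four dimension names are from the paper's summary; the element
# lists and guidance text below are assumed placeholders, not the
# paper's actual taxonomy entries or prompts.

TAXONOMY = {
    "Reasoning Invariants": ["consistency", "goal maintenance"],
    "Meta-Cognitive Controls": ["self-awareness", "strategy selection"],
    "Reasoning Representations": ["sequential organization", "hierarchical structure"],
    "Reasoning Operations": ["forward chaining", "backward chaining"],
}

# Hypothetical guidance reflecting cognitive structures that succeed on
# ill-structured tasks (dilemmas, diagnosis): encourage structural
# diversity instead of rigid, forward-only sequential reasoning.
ILL_STRUCTURED_GUIDANCE = (
    "Before answering: (1) consider more than one framing of the problem, "
    "(2) reason backward from candidate conclusions as well as forward from "
    "the givens, and (3) pause to check which assumptions you are relying on."
)

def guided_messages(problem: str) -> list[dict]:
    """Prepend reasoning guidance to an ill-structured problem as chat messages."""
    return [
        {"role": "system", "content": ILL_STRUCTURED_GUIDANCE},
        {"role": "user", "content": problem},
    ]

if __name__ == "__main__":
    example = ("A patient presents with fatigue and joint pain. "
               "What diagnoses would you consider, and why?")
    for message in guided_messages(example):
        print(f"[{message['role']}] {message['content']}")
```

The design choice here mirrors the paper's finding: rather than fine-tuning, the guidance simply steers the model at inference time toward the diverse cognitive structures it already possesses but underuses on ill-structured tasks.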
