AI, Education, and Understanding
When to Drive and When to Walk
This paper articulates a theoretical approach to the question of which aspects of higher education should incorporate AI large language models (LLMs) and which should not, using ideas from recent work in the epistemology of understanding. It exploits an extended analogy between walking and driving, using it to reject two extreme positions: the technophobic position (walking is always better and one should never drive; LLMs have no place in higher ed) and the technophilic position (driving is always better and we no longer need to practice walking; we should completely reorient higher ed by incorporating AI as much as possible). It also uses the driving and walking analogy to caution that changes to our epistemic practices in light of AI must take account of their embeddedness in broader educational infrastructures—especially limitations imposed by administrators. While LLMs may have changed our epistemic practices aimed at knowledge, and thus even changed what we take knowledge to be, they have not and cannot effect a parallel change in understanding. Focusing on understanding rather than knowledge can help us avoid the rush to problematic, short-term solutions, and instead find a thoughtful middle ground between technophobia and technophilia. This will involve a partial reorientation away from the focus on content and the mastery of factual information, and toward a focus on skills of understanding. It gives an account of understanding as a grasping of nonpropositional structure, and shows how this is of special relevance for the situational, contextual, and analogical thinking higher education ought to promote. Finally, it homes in on one particular skill of understanding: questioning.
Executive Impact: Reshaping Higher Education with AI
The paper's philosophical framework helps higher-education leaders decide where AI integration genuinely serves learning and where it risks undermining the cultivation of understanding and critical human skills.
Deep Analysis & Enterprise Applications
Understanding as Non-Propositional Grasping
The paper centers on distinguishing understanding from knowledge, advocating for the former as the primary epistemic good in higher education. Drawing on philosophers like Catherine Elgin and Linda Zagzebski, understanding is described as the grasp of non-propositional structure, rather than merely acquiring justified true beliefs. This holistic and relational quality allows for a deeper connection to reality, encompassing situational, contextual, and analogical thinking that LLMs, as purely linguistic systems, cannot replicate.
Navigating AI's Capabilities and Limitations
Utilizing an extended analogy of walking versus driving, the paper rejects both outright technophobia (AI has no place) and extreme technophilia (AI should completely reorient education). While LLMs can significantly augment epistemic practices aimed at knowledge acquisition (information retrieval, propositional inferences), they are fundamentally limited in fostering genuine understanding due to their lack of direct engagement with non-linguistic structures and embodied experience. The key is to identify when AI is genuinely needed ("driving") and when human skills ("walking") must be cultivated.
Shifting Focus to Skills and Infrastructure
The paper argues for a reorientation away from content mastery and toward developing skills of understanding. These include analogical thinking, moral and pragmatic reasoning, and, critically, the skill of questioning—which goes beyond mere prompt engineering. Successful AI integration also requires addressing the broader epistemic infrastructure: reducing class sizes and faculty workloads, providing adequate time for assignments, and encouraging discussion-based, creativity-focused learning environments that resist "purpose-drift."
Rebalancing Epistemic Practices: The Walking vs. Driving Analogy
| Feature | Knowledge (LLM Augmented) | Understanding (Human Centered) |
|---|---|---|
| Primary Goal | Acquiring justified true beliefs; information retrieval | Grasping non-propositional structure and its relations |
| LLM Role | Strong augmentation: retrieval, summarization, propositional inference | Limited: purely linguistic systems lack embodied, non-linguistic engagement |
| Nature | Propositional and linguistic | Holistic and relational; situational, contextual, analogical |
| Pedagogical Focus | Content mastery and factual information | Skills of questioning, analogical reasoning, and contextual analysis |
The paper advocates for a significant shift in higher education pedagogy, moving a substantial portion of focus from mere content mastery (which AI excels at) to the cultivation of understanding as a set of critical human skills. This includes fostering advanced forms of questioning, analogical reasoning, and situated contextual analysis, tasks beyond the current capabilities of LLMs.
Case Study: Reimagining Epistemic Infrastructure for the AI Era
The paper highlights that effective AI integration requires confronting existing "epistemic infrastructure" limitations in higher education. Current challenges include:
- Over-reliance on AI for basic tasks due to unsustainable faculty workloads.
- Large class sizes hindering individualized feedback and deep learning.
- Insufficient time for students to engage in critical, risk-taking intellectual work.
Proposed Solutions:
- Reduce class sizes and faculty teaching loads.
- Provide adequate time for students to complete complex assignments.
- Promote discussion-based classrooms over lecture-heavy models.
- Reward creativity and risk-taking over mere content regurgitation.
- Support full-time faculty to approach teaching as a profession, not a gig.
Impact: By addressing these systemic issues, institutions can create an environment where human skills of understanding, judgment, and questioning are genuinely cultivated, rather than passively outsourced to AI, ensuring a future-ready educational model.
Estimating Your Potential AI Impact
Institutions can estimate efficiency gains and cost savings from strategic AI integration by focusing on areas where AI augments, rather than replaces, core human understanding: the hours spent on routine "knowledge" tasks (retrieval, summarization) bound the realistic savings, while time devoted to cultivating understanding should be treated as out of scope for automation.
Your AI Implementation Roadmap
A phased approach to integrating AI effectively while preserving and enhancing human-centered education. This roadmap aligns with the paper's call for a thoughtful, infrastructure-aware strategy.
Phase 1: Epistemological Audit & Strategy Definition
Conduct a deep dive into current pedagogical practices and identify areas where AI can augment knowledge acquisition without compromising the cultivation of understanding. Define institutional goals for AI integration based on a clear understanding of its philosophical implications.
Phase 2: Infrastructure Assessment & Adjustment
Evaluate existing educational infrastructure (class sizes, faculty support, student time allocation) against the demands of fostering understanding. Implement changes to support more personalized learning, discussion-based classrooms, and opportunities for creative, risk-taking intellectual work.
Phase 3: Curriculum Redesign & Skill Development
Reorient curriculum to emphasize skills of understanding, such as advanced questioning, analogical reasoning, and situational thinking, over rote content mastery. Develop faculty training programs focused on teaching these skills and effectively leveraging AI as a supportive tool.
Phase 4: Pilot Programs & Iterative Implementation
Launch targeted pilot programs to test new pedagogical approaches and AI tools in controlled environments. Gather feedback, measure impact on student understanding, and iteratively refine strategies based on real-world results, ensuring alignment with human-centered educational values.
Ready to Navigate the Future of AI in Education?
Unlock the full potential of AI by focusing on understanding, not just knowledge. Let's design a strategy that empowers your institution and students for tomorrow's challenges.