The Human-Centered AI Pedagogical Engagement (HCAI-PE) Framework: A Foundational Paradigm Shift in AI Pedagogy
DOI: 10.54647/education880639
Author(s)
Stavissky Yuliya
Abstract
The rapid adoption of artificial intelligence (AI) in education, driven by the need for efficiency and personalization, risks leading to "learnification"—the reduction of education to a technical process disconnected from ethical and relational goals. This paper introduces the Human-Centered AI Pedagogical Engagement (HCAI-PE) Framework as a theoretical approach to this problem, viewing AI not as a replacement for human educators or learners, but as a mediator for cognitive, emotional, and ethical involvement.
HCAI-PE is based on a synthesis of three core theories: Self-Determination Theory (SDT), which ensures AI supports the psychological needs for autonomy, competence, and relatedness; Heutagogy, which positions AI as a partner in metacognition to promote self-determined learning; and Vygotsky's sociocultural theory, which frames AI as a non-neutral mediational tool that scaffolds learning within the Zone of Proximal Development (ZPD).
The framework functions within a dynamic triadic ecosystem composed of a teacher, learner, and AI, defining AI's role across three mediation areas: Cognitive (enhancing thinking and creativity), Emotional (offering emotional support and motivation), and Ethical (prompting reflection on bias and fairness). The model fundamentally redefines the teacher's role as the "ethical interpreter" and guide. Ultimately, HCAI-PE offers a blueprint for balanced engagement, ensuring AI systems are designed to support, not replace, human agency, reflection, and the core ethical values of teaching and learning.
Keywords
AI in Education, Human-Centered AI, Pedagogical Engagement, Self-Determination Theory (SDT), Heutagogy, Ethical Mediation, Learner Agency
Cite this paper
Stavissky Yuliya, The Human-Centered AI Pedagogical Engagement (HCAI-PE) Framework: A Foundational Paradigm Shift in AI Pedagogy, SCIREA Journal of Education, Volume 10, Issue 6, December 2025, pp. 344–365. DOI: 10.54647/education880639
References
[1] Bryson, J. J. (2022). The past decade and future of AI’s ethical challenges. AI & Society, 37, 1–15. https://link.springer.com/article/10.1007/s00146-022-01430-x
[2] Deci, E. L., & Ryan, R. M. (2000). The “what” and “why” of goal pursuits: Human needs and the self-determination of behavior. Psychological Inquiry, 11(4), 227–268. https://www.tandfonline.com/doi/abs/10.1207/S15327965PLI1104_01
[3] D’Mello, S., & Graesser, A. (2024). Integrating artificial intelligence to assess emotions in educational settings. Frontiers in Psychology, 15, 1387089. https://www.frontiersin.org/articles/10.3389/fpsyg.2024.1387089/full
[4] Fredricks, J. A., Wang, M.-T., Schall Linn, J., Hofkens, T. L., & Noonan, P. M. (2019). Using mixed-methods approaches to assess student engagement. Educational Psychologist, 54(3), 173–191. https://doi.org/10.1080/00461520.2019.1623031
[5] Hase, S., & Kenyon, C. (2000). From andragogy to heutagogy. Ultibase Articles. https://www.researchgate.net/publication/301339522_From_andragogy_to_heutagogy
[6] Holmes, W., Bialik, M., & Fadel, C. (2019). Artificial intelligence in education: Promises and implications for teaching and learning. Center for Curriculum Redesign. https://curriculumredesign.org/wp-content/uploads/AIED-Book-Excerpt-CCR.pdf
[7] Holmes, W., Porayska-Pomsta, K., Holstein, K., Sutherland, E., Baker, T., Buckingham Shum, S., & Koedinger, K. (2021). Ethics of AI in education: Towards a community-wide framework. International Journal of Artificial Intelligence in Education, 31, 485–509. https://link.springer.com/article/10.1007/s40593-021-00239-1
[8] Noddings, N. (2013). Caring: A relational approach to ethics and moral education (2nd ed.). University of California Press.
[9] Porayska-Pomsta, K. (2024). The ethics of artificial intelligence in education. arXiv preprint. https://arxiv.org/pdf/2406.11842.pdf
[10] Ryan, R. M., & Deci, E. L. (2000). Self-determination theory and the facilitation of intrinsic motivation, social development, and well-being. American Psychologist, 55(1), 68–78. https://pubmed.ncbi.nlm.nih.gov/11392867/
[11] Shneiderman, B. (2020). Human-centered artificial intelligence: Reliable, safe & trustworthy. International Journal of Human-Computer Interaction, 36(6), 495–504. https://arxiv.org/pdf/2002.04087.pdf
[12] Vistorte, A. O. R., Cogo-Moreira, H., & D’Mello, S. (2024). Artificial intelligence and emotion assessment in learning contexts: A systematic review. Frontiers in Psychology, 15, 1423158. https://www.frontiersin.org/articles/10.3389/fpsyg.2024.1423158/full
[13] Vygotsky, L. S. (1978). Mind in society: The development of higher psychological processes. Harvard University Press. https://home.fau.edu/musgrove/web/vygotsky1978.pdf
[14] Woolf, B., et al. (2022). Introduction to IJAIED special issue—FATE in AI in education. International Journal of Artificial Intelligence in Education, 32, 1–15. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9360687/
[15] Yuvaraj, R. (2025). Affective computing for learning in education: Technologies, methods, and applications. Education Sciences, 15(1), 65. https://www.mdpi.com/2227-7102/15/1/65