Abstract
As generative AI tools become increasingly integrated into educational practice, their use among pre-service teachers is often accompanied by hesitation and discomfort. This chapter examines the phenomenon of AI shaming among teacher education students—the stigma and reluctance to disclose AI tool use due to perceived threats to academic authenticity. Drawing on classroom insights and student reflections, it explores how social norms, institutional pressures, and identity formation shape this behavior. These experiences reveal the deep tension between embracing technological innovation and maintaining traditional standards of academic merit. The chapter highlights the implications for digital literacy, professional development, and ethical technology integration. It calls for a shift in narrative, framing AI not as a shortcut but as a tool for innovation. Actionable strategies for educators and institutions are proposed to foster open, reflective, and supportive environments for responsible AI use in teacher education.
Keywords: AI Shaming, Generative AI, Teacher Education, Academic Authenticity, Digital Literacy, Identity Formation, Ethical Technology Use, Professional Development
Introduction
In recent years, the rapid development of generative AI tools like ChatGPT, DALL·E, and Bard has begun reshaping various sectors, including education (Bozkurt et al., 2024; Zawacki-Richter et al., 2019). These tools, powered by advanced machine learning algorithms, can generate coherent text, realistic images, and creative content based on user input, making them invaluable for tasks ranging from ideation to conceptualization (Adetayo, 2024; da Silva & Ulbricht, 2024; Kamalov et al., 2023). For educators and students alike, generative AI offers new ways to approach problem-solving, streamline administrative tasks, and enhance creativity in learning environments (Garcia et al., 2024; Wang et al., 2024). In the context of teacher education, where future educators are expected to develop both pedagogical adaptability and digital competence, the integration of AI tools represents a transformative yet controversial shift.
Despite the potential benefits of these AI tools, their adoption is not without controversy. Among students—particularly those preparing for teaching professions—there is an emerging socio-academic phenomenon known as AI shaming. This concept refers to the reluctance or embarrassment students feel when admitting to using AI tools, stemming from fears that their reliance on such technology may be perceived as undermining their creativity, critical thinking, or academic integrity (Acut et al., 2024; Garcia, 2024; Giray, 2024). As highlighted in recent studies, AI shaming reflects internalized stigma around perceived "inauthentic" digital labor, and is particularly prevalent among students who feel pressure to conform to traditional notions of academic rigor and authorship (E. & A., 2024). In many cases, students may view the use of AI as a form of intellectual shortcutting, leading to a sense of guilt or shame—even when AI tools are used to support legitimate learning objectives (Zhai et al., 2024).
Such stigma surrounding AI use has deeper implications beyond personal discomfort; it can shape how future educators relate to emerging technologies in their professional lives. AI shaming may deter students from fully exploring the capabilities of these tools, limiting their potential to develop essential digital literacy and technological fluency—skills that are increasingly critical for success in the 21st-century workforce (Gantalao et al., 2025; Walter, 2024). This concern is particularly urgent in teacher education programs, where students are not only learning content but also internalizing beliefs and practices they will later model in classrooms. If teacher candidates feel shame or fear regarding AI use, they may resist integrating these tools into their future pedagogical practices—thereby perpetuating outdated norms around technology avoidance or resistance (Akgun & Greenhow, 2021). This not only undermines their development as digitally competent educators but also sustains a cycle of discomfort and silence around technological integration that could otherwise enhance learning.
The concept of AI shaming intersects with broader discussions of digital identity, educational authenticity, and professional formation. Research suggests that students’ attitudes toward technology are heavily influenced by their perceptions of what constitutes "real" or "legitimate" work (Wu et al., 2022). When students believe that using AI diminishes their intellectual efforts, they may experience a dissonance between their self-concept as capable learners and societal expectations for originality. As highlighted by Khlaif et al. (2022), such internal conflict can have long-term effects on how teacher candidates view their role in the classroom and their comfort level with integrating emerging technologies.
Hence, this chapter explores the lived realities, social narratives, and institutional conditions that contribute to AI shaming in teacher education. It also aims to provide a framework and practical strategies for educators and institutions to shift from a punitive culture of shame to one of responsible engagement, transparency, and digital empowerment. By addressing AI shaming head-on, we aim to shift the narrative around AI in education—from one of fear and judgment to one of critical awareness, innovation, and professional growth.
Main Focus of the Chapter
This chapter investigates the nuanced phenomenon of AI shaming in teacher education, shedding light on the complexities of integrating generative AI tools within academic and professional learning contexts (Hasanah et al., 2025). While such tools—ranging from content generators to assessment aids—hold the potential to enhance pedagogical innovation and learner autonomy, this chapter critically examines how teacher education students navigate the social stigma surrounding AI use in their academic work. Students often internalize societal and institutional expectations of originality, authenticity, and intellectual effort, which may conflict with the practical and creative affordances of AI technologies. As a result, many students conceal their AI use due to perceived threats to academic integrity, fears of being labeled as dishonest, or concerns over eroding their credibility as future educators. These tensions are particularly pronounced in programs preparing teachers to be both digitally literate and ethically grounded.
The chapter further explores broader ethical and pedagogical challenges, including risks of AI misuse, algorithmic bias, and overdependence, which may contribute to skill obsolescence and diminish core teaching competencies. It contextualizes AI shaming within the evolving discourse on academic honesty, professional identity formation, and educational equity—issues central to this volume’s themes on bias, misuse, and displacement. Methodologically, this work adopts a reflective-analytical lens (Olmos-Vega et al., 2022), integrating insights from the author’s teaching practice with a critical synthesis of emerging literature on AI in teacher education. The analysis draws from classroom interactions, narrative accounts, and student reflections collected through journaling activities, class forums, and guided discussions conducted from 2023 to 2024 in courses on Science, Technology, and Society (STS) and Educational Technology.
The narrative elements are informed by conceptual frameworks from educational sociology—notably Goffman’s stigma theory—alongside theories of identity and discourse (Gee, 2000), and research on teacher development and professional ethics. While not empirical in the strictest sense, the chapter employs an autoethnographic orientation (Acut, 2024; Chang, 2016), supported by illustrative classroom-based cases and critical interpretation. A conceptual model is proposed to map the interrelated factors contributing to AI shaming and to offer pathways toward fostering transparency, critical digital literacy, and ethical AI integration in teacher education.
AI in the Context of Teacher Education
Current Landscape of AI Integration in Teacher Education
The integration of generative AI tools into teacher education is increasingly seen as an essential step in preparing future educators for the demands of modern classrooms. These tools offer transformative opportunities for lesson planning, resource generation, personalized learning, and formative assessment (Garcia et al., 2025). Universities and teacher education programs around the globe have begun embedding AI-focused content into their curricula, either as part of educational technology courses or through dedicated modules on artificial intelligence and machine learning (Salas-Pilco et al., 2022). These efforts aim to ensure that teacher candidates are not only familiar with the latest technological advancements but also understand how to ethically and effectively use AI in their teaching practices (Ng et al., 2023).
Incorporating AI tools into teacher education programs has led to several pedagogical innovations. For instance, AI-based platforms like ChatGPT are being used to help teacher candidates draft lesson plans, generate classroom activities, and engage in reflective writing. Similarly, AI-driven learning management systems (LMS) are being employed to monitor student progress, analyze performance data, and suggest personalized interventions (Garcia et al., 2025; Khan et al., 2021). These advancements allow for a more adaptive and responsive approach to teaching and learning, helping future teachers to better meet the needs of diverse learners.
However, the integration of AI into teacher education is not without challenges. Many educators express concerns about over-reliance on AI, ethical considerations related to data privacy, and the potential for AI to perpetuate bias (Kooli, 2023). These concerns are particularly relevant in teacher education programs, where future teachers must learn to navigate the balance between leveraging technology and maintaining pedagogical integrity. As a result, teacher education curricula are increasingly focused on helping students develop critical digital literacy skills, including understanding how AI works, its limitations, and how to use it in ways that enhance—not replace—human judgment in the classroom (Sperling et al., 2024).
Student Perspectives on AI Tools
Student perspectives on AI tools in teacher education are influenced by various factors, such as their prior experiences with technology, personal beliefs about AI’s role in education, and the broader societal context in which they learn. Many teacher education students express ambivalence toward AI tools. On the one hand, they recognize the potential benefits, including automating routine tasks, enhancing creativity, and providing personalized learning experiences (Chan & Hu, 2023). On the other hand, concerns arise, such as fears of academic dishonesty, the perceived loss of the human element in teaching, and the potential for AI to erode their professional identities as educators (Al-Zahrani, 2024). This ambivalence is closely tied to the phenomenon of AI shaming—where students feel embarrassed or hesitant to admit using AI tools in their academic work. This reluctance often stems from societal perceptions of AI as a shortcut rather than a legitimate aid in the learning process. Educational environments, particularly in teacher education, value authenticity and originality, reinforcing these negative perceptions (Ajjawi et al., 2023). As a result, students fear being judged as less capable or dishonest if they openly use AI, which hinders their willingness to fully leverage its benefits (Zhai et al., 2024).
Studies have documented various instances of AI shaming. For example, one student admitted that AI had significantly improved the structure of their writing, yet felt guilty, as if they had bypassed the academic process (Ahmad et al., 2023). Similarly, in group projects, students who suggested using AI for brainstorming were met with skepticism, causing them to withdraw their ideas. This reflects how societal expectations about the "right" way to learn inhibit effective use of technology, even when it offers clear benefits (Haleem et al., 2022). These challenges reflect a broader tension between students' recognition of AI’s potential and their fears of how it might impact their roles as future educators. Some students voiced concerns that relying on AI for lesson planning could reduce their creativity and autonomy (van den Berg & du Plessis, 2023). Others worried that overuse might hinder critical skill development, such as problem-solving and decision-making, essential for effective teaching (Ahmad et al., 2023). These concerns highlight a need for careful consideration of AI's role in teacher education (Lameras & Arnab, 2021). Critically, AI shaming creates barriers to productive engagement with AI tools in teacher education, suggesting a need for systemic change in how AI is perceived and discussed within academic settings. Educators play a critical role in shaping these perceptions by modeling responsible AI use (Adel et al., 2024). By reducing stigma and promoting positive engagement, they can help students integrate AI meaningfully into their learning (Bulathwela et al., 2024).
Experiences of AI Shaming
The phenomenon of AI shaming—where individuals feel embarrassed or hesitant to admit their use of AI tools—has become increasingly evident in educational contexts. This reluctance often stems from societal perceptions of AI as a shortcut rather than a legitimate learning aid. AI shaming is particularly prominent in educational settings, where the value of authenticity and originality is held in high regard, often leading to negative judgments about the use of AI.
Instances and Anecdotes
Across multiple classroom settings, the authors observed strikingly similar patterns of student hesitation and discomfort around the use of generative AI tools. These anecdotes highlight a recurring phenomenon: students are engaging with AI but often choose to conceal their usage due to fears of academic judgment or perceived lack of integrity.
Narrative: "Ana’s Dilemma"
Ana, a third-year education major, used ChatGPT to brainstorm ideas for a microteaching lesson plan. When she shared her draft with a peer during practicum prep week, the response was dismissive: "Did you actually write this yourself or just copy-paste from AI?" Ana laughed it off, but the moment left her uneasy. In later discussions, she avoided mentioning any AI use, despite its continued support in refining her instructional strategies. Later, Ana’s mentor teacher cautioned her: "Some professors frown on using AI. It can make your work look fake." Though meant as advice, the comment reinforced her anxiety. Ana began rewriting AI-assisted outputs just to hide their origin—losing valuable time and feeling increasingly disconnected from her own creative process. This narrative, based on recurring classroom patterns, underscores the internal conflict many teacher education students face: a tension between leveraging digital tools for efficiency and upholding perceived standards of "authentic" academic labor.
Classroom Reflections
In Dharel’s STS class, a similar dynamic emerged. When he asked students whether they had used tools like ChatGPT for coursework, no hands were raised. Only after he shared his own experiences with AI for ideation did the atmosphere shift. Students gradually opened up, admitting to using AI for brainstorming, outlining, and revising. Their initial silence highlighted the social risks they associated with AI use—fears of being labeled lazy, dishonest, or less competent. One student shared how they relied on AI to help structure essays but never acknowledged it in class, fearing criticism. Another noted that although AI enhanced their clarity in writing, they felt guilty, as if they had bypassed the academic process entirely.
In Eliza’s class, a student recounted suggesting an AI tool for brainstorming during a group project—only to have the idea dismissed as undermining their group’s creativity. Another student, while using AI to summarize lessons for exam prep, chose not to disclose it to peers, worried they’d be viewed as incapable of studying independently.
Likewise, in Anabelle’s classroom, a pre-service teacher admitted to using AI to draft lesson plans for her teaching practicum. Despite the resource helping her produce creative outputs, she deliberately downplayed its role during peer feedback and supervisor evaluations, fearing that openness would raise doubts about her pedagogical skills and professional readiness.
Shared Observations Across Institutions
Though these anecdotes stem from different courses and institutions, they reveal a consistent pattern: students often hide their use of AI due to internalized shame, social judgment, and institutional ambiguity. Co-authors from different educational contexts noted this trend, reinforcing the idea that AI shaming is not isolated but systemic.
These accounts, as depicted in Figure 1, demonstrate how AI use, when clouded by stigma, becomes a source of anxiety rather than empowerment. They reveal how students navigate a hidden curriculum—one where values like originality, authenticity, and individual effort are rigidly upheld, often at the expense of innovation. The implications for teacher education are profound: if future educators are to model responsible and creative use of AI in their own classrooms, they must first feel safe to explore these tools without fear of judgment.
Factors Contributing to AI Shaming
While several studies have pointed to traditional academic expectations and institutional ambiguity as reasons for the resistance toward AI tools in education (Birks & Clare, 2023; Chen et al., 2022; Kim et al., 2024), this chapter offers a novel framework that synthesizes classroom narratives, reflective observation, and the literature. We propose a conceptual model—The AI Shaming Framework in Teacher Education—comprising four interrelated dimensions: Internal Conflict, Peer Judgment, Educator Influence, and Policy Silence. These elements explain how AI shaming emerges and persists in pre-service teacher education contexts.
Internal Conflict: Navigating Values vs. Efficiency
At the heart of AI shaming lies an internal tension experienced by students who recognize the utility of AI tools but simultaneously feel guilt or shame when using them. This stems from the entrenched academic valorization of originality and self-reliance, which students internalize throughout their schooling (Birks & Clare, 2023; Chen et al., 2022). Ana’s composite narrative, as well as firsthand classroom observations, underscore how students feel they are compromising their academic integrity—even when AI supports learning rather than replaces effort. As a result, they either over-edit AI-generated content to remove traces of machine assistance or avoid disclosing its use altogether, leading to time-consuming processes and creative disconnection.
Peer Judgment: Social Stigma and Image Management
Students do not operate in isolation. The narratives from Dharel, Eliza, and Anabelle’s classrooms show how peer interactions significantly shape perceptions of acceptable academic behavior. Comments like "Did you just copy-paste from AI?" or the outright rejection of AI in group settings reflect a performative academic culture—one in which being seen as independent and intellectually competent takes precedence. Peer influence generates fear of being perceived as lazy, deceptive, or less capable (Zhang et al., 2023), which pressures students into secrecy and self-censorship, especially in competitive learning environments.
Educator Influence: Authority Figures as Norm Enforcers
Educators play a pivotal role in shaping attitudes toward AI use. When mentors issue vague warnings (e.g., "AI makes your work look fake"), they may inadvertently contribute to student anxieties. As seen in Ana’s story, these comments instill a sense of surveillance and moral suspicion, leading students to equate AI-assisted work with academic misconduct. In many institutions, the lack of pedagogical modeling means students rarely see responsible AI use demonstrated or encouraged, leaving a vacuum filled with doubt and fear (Bozkurt et al., 2024; Coman & Cardon, 2024).
Policy Silence: Ambiguity and Mistrust
Compounding the above factors is the absence of clear institutional policies or instructional scaffolds around AI usage. Many universities are still grappling with how to integrate generative AI into academic integrity policies, leaving students confused about what is acceptable (Gustilo et al., 2024; Kim et al., 2024). This policy vacuum results in students defaulting to risk-averse behaviors—avoiding or hiding AI use altogether. In contexts like teacher education, where students are expected to uphold professional and ethical standards, this silence contributes to AI being viewed with suspicion rather than curiosity.
Conceptualizing AI shaming through this framework, as illustrated in Figure 2, allows teacher educators to move beyond reiterating known barriers to digital tool adoption. It introduces a structured lens to critically examine the layered dynamics—internal conflict, peer judgment, educator influence, and policy silence—that perpetuate stigma around AI use. This perspective not only acknowledges the emotional and social risks students face but also equips educators with a foundation to design pedagogical interventions that validate responsible AI use, foster open dialogue, and promote ethical integration in both academic and field-based contexts.
The Role of Educators
Educators play a pivotal role in shaping students' behaviors toward the use of AI. Their perceptions and approaches toward AI not only influence students' acceptance and usage of these tools but also contribute significantly to the broader culture around technology in education.
Educator Attitudes Toward AI
The attitudes that educators hold toward AI tools profoundly impact how students perceive and use these technologies in their academic work. If educators demonstrate openness and curiosity toward AI, students are more likely to feel comfortable exploring and integrating AI into their learning processes. This positive reinforcement can lead to innovative approaches in their assignments and projects. On the other hand, when educators express skepticism or dismiss the use of AI as a threat to academic integrity or authenticity, students may feel pressured to hide their AI use due to fear of judgment or negative consequences (Zhai et al., 2024). This dynamic can create a divide between students who utilize AI to enhance their learning and those who refrain from using it out of fear, leading to feelings of inadequacy or isolation among the latter.
Research shows that educator attitudes toward AI can range from enthusiasm and support to reluctance and resistance (Chan & Lee, 2023; Kaya et al., 2022). Teachers who view AI as a valuable tool for enhancing creativity and critical thinking can create an environment where students feel encouraged to experiment with these technologies (Al Darayseh, 2023). This supportive atmosphere allows students to take risks and explore AI's capabilities, ultimately fostering a sense of ownership over their learning. However, educators who focus solely on the potential pitfalls—such as concerns about AI replacing human jobs or reducing the need for traditional skills—may inadvertently reinforce the stigma around AI, contributing to AI shaming among students. Such negative perceptions can stifle students' willingness to engage with AI, potentially limiting their skill development in an increasingly digital world.
In many cases, the reluctance among educators to fully embrace AI stems from a lack of familiarity with these tools and how they can be meaningfully integrated into their curricula. Some educators worry that AI will compromise the integrity of learning by providing easy shortcuts, while others are unsure of how to assess work that has been AI-assisted (Lee et al., 2024). This lack of understanding can lead to missed opportunities for educators to utilize AI as a complementary resource in their teaching. As a result, students often sense this ambivalence and mirror it in their own attitudes toward AI, feeling uncertain about its role in their education and hesitant to use it effectively. Furthermore, the absence of institutional policies and clear ethical guidelines on AI use in education exacerbates these tensions (Nguyen et al., 2022). When educators themselves are uncertain about how to approach AI, they may pass this uncertainty onto their students, leading to inconsistent messaging about the legitimacy of AI in the classroom. Such inconsistency can foster a culture of mistrust regarding AI tools, preventing students from recognizing their potential benefits. In the absence of well-defined frameworks for AI usage, both educators and students are left grappling with how to navigate the evolving landscape of technology in education, further complicating their relationship with AI.
| Educator Attitude | Description | Impact on Students |
|---|---|---|
| Openness and Curiosity | Educators with this attitude actively explore AI tools and encourage critical engagement, fostering a supportive environment where students feel safe to experiment and discuss AI use. This openness reduces stigma and promotes responsible digital literacy. | Students feel empowered to experiment with AI technologies. |
| Skepticism and Dismissiveness | Skeptical educators often emphasize AI’s risks to academic integrity, which can create fear-driven classrooms where students hide AI use or associate it with cheating. Such attitudes may reinforce AI shaming and hinder authentic technology engagement. | Students conceal their AI use due to fear of judgment. |
| Enthusiasm for Innovation | Teachers who embrace AI as a means of pedagogical innovation integrate it into lessons and inspire students to explore its creative and analytical potential. This enthusiasm cultivates student confidence and encourages ethical AI use. | Students engage more confidently with AI, seeing it as legitimate. |
| Reluctance Due to Uncertainty | Reluctant educators often lack training or clarity on AI's role in teaching, leading to inconsistent practices and mixed signals for students. This uncertainty fosters confusion, making students hesitant or unsure about legitimate AI integration. | Students mirror this uncertainty, leading to apprehension in using AI. |
| Institutionally Reactive | Educators adjust their AI stance based on the presence or absence of formal guidelines. In environments lacking clear policies, they may default to conservative or avoidance-based approaches. | Students experience inconsistent or overly cautious messaging about AI, leading to confusion or fear of inappropriate use. |
The attitudes of educators toward AI play a crucial role in shaping students' perceptions and willingness to engage with these technologies. Fostering a culture of openness and curiosity can significantly reduce the stigma surrounding AI use, encouraging students to embrace these tools as legitimate aids in their learning journey. Moreover, implementing clear institutional policies and providing professional development on AI integration will equip educators with the necessary skills and confidence to guide their students effectively. Ultimately, creating a supportive and informed environment will empower future educators to utilize AI responsibly, enhancing their educational experiences and professional growth.
Actionable Strategies for Addressing AI Shaming
AI shaming—a phenomenon where students feel embarrassed or are judged for using AI tools—can hinder open discussions, ethical use, and digital empowerment in teacher education (Giray, 2024). To promote a supportive, informed, and inclusive culture of responsible AI use, the following six structured, actionable strategies are proposed:
1. Implement AI Literacy Workshops
Objective: To equip both pre-service teachers and educators with the foundational skills and ethical understanding necessary to use AI tools confidently and responsibly.
Description: AI literacy workshops provide structured, hands-on sessions where participants learn how to interact with AI tools, reflect on their outputs, and evaluate ethical implications.
Workshop Template:
- Session 1: Introduction to Generative AI – Understanding how tools like ChatGPT, DALL·E, and Copilot function.
- Session 2: Use Cases in Teaching and Learning – Applications in lesson planning, brainstorming, feedback generation.
- Session 3: Ethical Use and Academic Integrity – Discussing plagiarism, authorship, and transparency.
- Session 4: Practicum Task and Reflection – Engaging students with tasks that integrate AI and require critical analysis.
Example: At the University of Arizona (2025), faculty and students engaged in the "Transforming Teaching with AI: Integrating GPT and LLMs" workshop. This hands-on session provided a step-by-step guide to licensing, configuring, and integrating large language models (LLMs) such as ChatGPT, Gemini, and Copilot into classroom practice—at a cost-effective rate (under $50, with free student access). Participants explored the pedagogical affordances of different LLMs and gained practical skills in AI-enhanced lesson planning, feedback generation, and instructional design. Initial feedback indicated increased confidence, reduced barriers to adoption, and improved student engagement through more personalized and interactive learning experiences.
2. Integrate an AI Ethics Module into the Curriculum
Objective: To provide a formal space for students to explore philosophical, social, and professional issues surrounding AI use in education.
Description: By embedding AI ethics into education courses (e.g., Educational Technology, Foundations of Education), teacher candidates gain a deeper understanding of responsible AI use.
Sample Module Outline:
- Week 1: Introduction to AI Ethics in Education (autonomy, justice, fairness)
- Week 2: Plagiarism vs. Assistance – Gray areas of AI-generated content
- Week 3: Bias and Fairness in AI Tools
- Week 4: Institutional Policies and Writing AI Usage Disclaimers
Assessment: Case study analysis, ethical dilemmas debate, AI policy drafting
Example: In the United States, secondary Computer Science and English Language Arts teachers across urban, suburban, and semi-rural school districts piloted a project-based AI ethics curriculum to contextualize the complexities of artificial intelligence for their diverse student populations (Walsh et al., 2023). Recognizing that AI ethics is an urgent yet often overlooked topic in K–12 education, these educators adapted the curriculum to align with their students’ local realities—incorporating discussions on algorithmic bias, data privacy, and the social impact of generative AI. The project featured hands-on design challenges, case study analyses, and community impact investigations, which not only deepened students’ understanding of AI technologies but also fostered critical thinking, civic awareness, and interdisciplinary learning.
3. Facilitate "AI Sharing Circles"
Objective: To normalize AI use through peer dialogue, reduce stigma, and foster a reflective, non-judgmental classroom climate.
Description: AI Sharing Circles are structured, small-group conversations where students reflect on and discuss their experiences with AI. These discussions promote emotional safety, normalize diverse AI practices, and build a shared understanding of responsible use.
Protocol Format:
- Opening Round: "Share one way you’ve used AI this week."
- Middle Round: "What challenges or concerns have you faced?"
- Closing Round: "What support do you need to use AI more effectively?"
Facilitator’s Role: Ensure psychological safety, highlight positive patterns, and connect reflections to course objectives.
Example: Prior to the 2023–2024 school year, a school district in Canada initiated a collaborative planning phase for monthly AI Sharing Circles, designed to integrate Indigenous knowledge systems with technological professional development. The Learning and Leadership Services (LLS) team, in partnership with a community Elder and the Equity team, facilitated discussions with school administrators to collect feedback and inform the design of these circles. These sessions were not standalone activities but served as complementary professional development, aligning with broader equity and instructional goals. Rooted in the Indigenous principle of Etuaptmumk (Two-Eyed Seeing), this initiative aimed to bridge Western and Indigenous ways of knowing, offering a culturally grounded approach to AI integration that fosters reflection, inclusivity, and ethical awareness in leadership and teaching practices.
4. Design Assignments that Incorporate AI Use
Objective: To reframe AI as a legitimate educational aid rather than a form of cheating or shortcut.
Description: Instructors can explicitly require or allow the use of AI tools in assignments, with follow-up reflection or comparison tasks that promote critical thinking.
Example Assignment:
- Task: Use ChatGPT to generate a draft lesson plan on ecosystems.
- Reflection: "What suggestions did you accept, reject, or modify? Why?"
- Evaluation Criteria: Clarity of AI output, alignment with curriculum, originality of final product.
- Outcome: Students learn to use AI as a partner in ideation, while developing their capacity to critique and refine AI-generated outputs.
Example: The Stanford Teaching Commons (2024) offers a comprehensive module titled "Integrating AI into Assignments," which guides educators through the process of embedding generative AI tools into student assessment tasks. The module emphasizes designing meaningful assessments that align with clear learning objectives and respond to student perspectives on AI use. This student-centered approach fosters responsible AI integration, critical thinking, and academic integrity, positioning AI as a tool for deeper engagement rather than shortcut learning.
5. Provide Professional Development for Educators
Objective: To build confidence and competence in facilitating AI-enhanced learning experiences.
Description: Many instructors may themselves feel uncertain about AI. Institutions must offer ongoing professional development (PD) workshops focused on:
- AI literacy
- AI integration into pedagogy
- Designing AI-compatible assessments
- Addressing student fears and ethical concerns
PD Model: Blended (asynchronous modules + synchronous coaching)
Example: A study conducted at Palestine Technical University Kadoorie and Hebron University evaluated the AI literacy levels of preservice teachers and tested the impact of a professional development program grounded in the Instructional Design Framework for AI Literacy. Using a quasi-experimental pretest-posttest design, the study involved 37 undergraduate participants and utilized a validated AI literacy scale for assessment. The findings revealed that the program significantly improved AI literacy skills among preservice teachers, regardless of gender or specialization. The study recommends embedding AI tools in both pre-service and in-service teacher training and expanding research to diverse disciplines to increase generalizability and effectiveness of AI-focused professional development strategies (Younis, 2024).
6. Cultivate a Positive AI Culture in the Classroom
Objective: To establish AI as a respected and creative tool through storytelling, real-world use cases, and student empowerment.
Description: Educators can share inspiring stories of AI in education, celebrate successful student use cases, and showcase how AI is used in teaching professions.
Initiatives:
- Display posters or slides showing AI-assisted lesson plans or rubrics created by students
- Invite alumni or guest speakers who integrate AI into their teaching
- Host a "Creative AI Challenge" where students showcase ethical AI-assisted projects
Example: In discussing how educators can embrace generative AI in the classroom, Harouni, a lecturer at the Harvard Graduate School of Education, urges teachers to acknowledge and critically engage with the presence of tools like ChatGPT. He suggests that educators should guide students in learning how to use AI responsibly, ask deeper questions, and use the limitations of AI to spark creative exploration. Harouni emphasizes that when AI reveals "our failure of imagination," that’s precisely when the real learning begins. He encourages the use of AI as a collaborative partner in the learning process, advocating for assignments that challenge students to rethink traditional frameworks rather than simply regurgitate information.
| Strategy | Focus | Key Output |
|---|---|---|
| AI Literacy Workshops | Knowledge + Skills | AI use reflections, increased confidence |
| AI Ethics Curriculum Module | Values + Critical Thinking | Policy drafts, ethical debate performance |
| AI Sharing Circles | Emotional Safety + Dialogue | Peer support, reduced stigma |
| AI-Integrated Assignments | Practice + Evaluation | Critiqued outputs, reflective comparison |
| PD for Educators | Institutional Capacity | Teacher-designed AI-enhanced lessons |
| Cultivating Positive AI Culture | Identity + Belonging | Vlogs, showcases, peer-celebrated best practices |
Implications for Teacher Education
The growing presence of AI in education brings both opportunities and challenges (Garcia, 2025; Xiao et al., 2025), particularly in shaping the identities and professional trajectories of future educators. AI shaming can profoundly influence the way teacher education students perceive their role in the classroom and how they develop professionally. These implications call for a fundamental rethinking of how AI is integrated into teacher education programs, shifting the narrative from one of shame and fear to one of empowerment and innovation.
Impacts on Student Identity and Professional Development
AI shaming can have detrimental effects on the identity formation and professional development of future educators. For many teacher education students, their training is not just about acquiring pedagogical knowledge and skills but also about forming a professional identity as an educator. When AI shaming is present, students may internalize the idea that using AI is somehow "cheating" or undermining their role as an authentic source of knowledge. This internal conflict can lead to self-doubt, hesitancy in adopting new technologies, and even a diminished sense of professional competence. In the context of professional development, AI shaming can create barriers to innovation and growth. Future educators who feel stigmatized for using AI may avoid these tools altogether, missing out on opportunities to enhance their teaching practices, develop new skills, and prepare for the increasingly digital classrooms of the future (Zawacki-Richter et al., 2019). As AI tools continue to evolve, educators who are not familiar with these technologies may find themselves at a disadvantage, unable to incorporate AI into their pedagogy and meet the demands of a technology-driven educational landscape (Hennessy et al., 2022).
Moreover, AI shaming can reinforce traditional, hierarchical notions of teacher authority, where educators are expected to be the sole experts in the classroom. This perception may prevent teacher education students from embracing AI as a collaborative partner in their teaching, potentially stifling creativity, curiosity, and experimentation (Zawacki-Richter et al., 2019). In teacher education, identity formation goes beyond acquiring instructional strategies—it encompasses evolving beliefs about what it means to be a competent, ethical, and innovative educator. When AI use is stigmatized, students may internalize feelings of guilt or inauthenticity, viewing reliance on AI as a breach of academic integrity or a threat to their legitimacy as future teachers. According to Erving Goffman’s (1963) theory of stigma, such labeling leads individuals to manage a "spoiled identity," which can result in concealment of AI use or disengagement from technology-enhanced learning environments.
Psychologically, this tension undermines students’ emotional safety and their self-efficacy, a key construct in Bandura’s (1997) social cognitive theory, which emphasizes the importance of belief in one’s capabilities to organize and execute the actions required to manage prospective situations. Students who are shamed for exploring AI may develop low confidence in their technological competencies, limiting their willingness to innovate or experiment with emerging tools. This is particularly problematic in digital learning ecosystems where adaptability is vital.
Professionally, AI shaming curbs critical opportunities for growth. Students may avoid engaging with AI tools for fear of judgment, thereby missing chances to enhance their lesson design, personalize learning, or leverage analytics for student assessment (Hennessy et al., 2022; Zawacki-Richter et al., 2019). Furthermore, shaming reinforces outdated, hierarchical models of teaching where authority is vested solely in human expertise, discouraging more collaborative, co-constructive relationships between educators and intelligent systems. Cultural contexts also shape these dynamics. In collectivist education systems, where conformity to group norms is highly valued, students may feel more pressure to suppress AI usage if institutional attitudes are negative or ambiguous (Chen & Unal, 2023; Shahzalal & Adnan, 2022). In contrast, individualist systems may afford more freedom to experiment yet can also isolate students who deviate from perceived academic norms. Without clear institutional policies or culturally responsive discourse around AI, future educators are left to navigate these tensions alone, risking confusion, professional insecurity, and disconnection from technological progress (Delello et al., 2025).
To address these challenges, teacher education programs must cultivate psychologically safe learning environments where AI use is destigmatized and critically examined (Güneyli et al., 2024). This includes integrating AI literacy into the curriculum, offering professional development grounded in digital ethics, and establishing transparent institutional policies that recognize the evolving role of AI in teaching and learning (Funa & Gabay, 2025). Normalizing, supporting, and recognizing AI use as a valuable part of professional development requires a shift in both the culture of education and the narratives that surround AI tools; only then can students fully develop into confident, reflective, and future-ready educators.
Rethinking the Narrative Around AI in Education
To mitigate the effects of AI shaming and to empower future educators, it is essential to rethink the narrative around AI in education. Instead of positioning AI as a threat to traditional teaching practices or as a tool that undermines academic integrity, the conversation must shift toward viewing AI as a means of enhancing professional capabilities and fostering innovation in the classroom (Mangubat et al., 2025; Xiao et al., 2025).
Empowering educators as AI facilitators. Teacher education programs should promote the idea that educators can serve as facilitators of AI use in the classroom. This approach reframes AI not as a replacement for teachers but as a complementary tool that can support diverse learning needs, streamline administrative tasks, and provide personalized feedback (Luckin et al., 2022). Educators should be encouraged to explore how AI can assist in lesson planning, assessment, and differentiated instruction, thereby empowering them to become AI leaders in education.
Promoting AI literacy in teacher education. Integrating AI literacy into teacher education curricula is a critical step in reducing AI shaming. Future educators must be equipped with the knowledge and skills to critically evaluate AI tools, understand their limitations, and use them ethically in their teaching (Miller et al., 2025). AI literacy also involves recognizing how AI can contribute to creative problem-solving and innovation in education, shifting the focus from fear of replacement to the potential for professional growth (Ding et al., 2024).
Fostering a growth mindset around AI. Teacher education programs should encourage a growth mindset toward AI, emphasizing that learning to work with AI is an ongoing process of professional development. Rather than seeing AI as a fixed skillset or as something that students must master immediately, educators can frame AI as an evolving tool that they can experiment with and learn from over time. This approach reduces the pressure on teacher education students to have all the answers and instead positions them as lifelong learners in a rapidly changing educational landscape (Dang & Liu, 2022; Ng et al., 2023).
Encouraging open dialogue about AI. Creating spaces for open dialogue about AI use can help dismantle the stigma and shame associated with these tools. Teacher education programs should facilitate discussions where students can share their experiences, both positive and negative, with AI. These conversations can help normalize AI use and highlight its practical benefits while also addressing any concerns about ethics, authenticity, or academic dishonesty.
Celebrating AI-driven innovation in teaching. Finally, teacher education programs should celebrate and showcase examples of AI-driven innovation in teaching. Highlighting case studies, classroom projects, or individual success stories where AI was used to enhance teaching and learning can shift the narrative from one of shame to one of possibility. By positioning AI as a tool for creative and innovative teaching, educators can inspire future teachers to experiment with AI in ways that align with their professional goals and the needs of their students (Walter, 2024).
Rethinking the narrative around AI in education is essential for addressing AI shaming and for ensuring that future educators are equipped to thrive in technology-enhanced classrooms. Teacher education programs that emphasize AI's potential for creative and innovative teaching can help educators develop the confidence and skills necessary for its integration. Encouraging experimentation with AI tools, fostering open dialogue, and celebrating AI-driven success stories can shift the narrative toward possibility and growth. This approach ensures that future teachers are equipped not only to navigate AI but to use it effectively in ways that align with their professional aspirations and the diverse needs of their students.
Conclusion
This chapter has illuminated the complex and often underexplored phenomenon of AI shaming among teacher education students. Far from being a peripheral concern, AI shaming reflects broader anxieties about technological integration in education, professional legitimacy, and the evolving roles of teachers in AI-mediated classrooms. Student narratives revealed not just discomfort with using AI tools, but also a deeper struggle tied to identity formation, ethical uncertainty, and peer or institutional judgment. These affective dimensions demand that we go beyond mere technical training. To move forward, it is essential that teacher education programs cultivate a psychologically safe and critically reflective space where the use of generative AI is normalized, de-stigmatized, and pedagogically situated. Doing so requires not only the inclusion of AI literacy in the curriculum, but also professional development frameworks that address emotional resilience, ethical reasoning, and equity concerns surrounding AI use.
Institutions must also resist universalized approaches and instead ground interventions in local cultural contexts, recognizing that attitudes toward AI are shaped by access, norms, and pre-existing educational narratives. This is especially vital in regions where systemic constraints intersect with rapidly evolving digital expectations. Collaborative dialogue among students, educators, and policymakers is therefore crucial—not merely to manage the risks of AI, but to harness its potential responsibly and creatively. Reframing AI from a threat to a catalyst for teacher agency and innovation is key. Only then can we transform AI shaming into an opportunity for professional growth, critical consciousness, and inclusive practice in future-ready education.
Key Terms and Definitions
AI Culture: This describes the evolving set of social norms, values, beliefs, practices, and narratives surrounding the development, acceptance, and resistance to artificial intelligence technologies, especially as they influence identity, creativity, and power dynamics in educational spaces.
AI Literacy: This refers to the ability to understand, evaluate, and use artificial intelligence technologies responsibly and ethically; it involves critical awareness of how AI systems operate, their limitations, and their implications for society, education, and decision-making.
AI Shaming: This refers to the social or academic stigma directed at individuals—particularly students or educators—who openly use artificial intelligence tools in learning environments, often rooted in misconceptions about cheating, intellectual laziness, or over-reliance on technology.
Digital Literacy: This refers to the capacity to access, evaluate, create, and communicate information using digital technologies, while navigating the ethical, cultural, and technical challenges of living in an increasingly mediated world.
Generative AI: A branch of artificial intelligence that creates original content—such as text, images, code, or audio—based on learned patterns from vast datasets, with popular tools like ChatGPT and DALL·E revolutionizing human-computer interaction in education and beyond.
Pedagogical Innovation: This refers to the design and implementation of new teaching strategies, models, or technologies that enhance student engagement, learning outcomes, and inclusivity, often involving active experimentation with digital tools and learner-centered approaches.
Professional Identity Formation: This involves the internalization of values, dispositions, roles, and responsibilities that shape how pre-service teachers see themselves as future educators, influenced by cultural, institutional, and technological forces.
Teacher Education: This encompasses the formal training, professional development, and practical experiences that prepare individuals to become competent educators, focusing on both content knowledge and pedagogical strategies responsive to contemporary educational needs.