Introduction

Computer programming is a vital area of research. The digitization of virtually all aspects of human life has underscored its indispensable role in modern society. According to the Asian Development Bank (2022), programming skills are increasingly becoming a requisite across various professional fields worldwide. This shift reflects a fundamental transformation in the skill sets required for professional success and societal contribution (Tuomi et al., 2018). Notably, this trend is not confined to the technology sector. Fields as varied as healthcare, finance, education, agriculture, and the arts, where coding proficiency was once merely beneficial (Guo, 2013), now require it as an essential skill. The International Labour Organization (2021) corroborated this perspective, revealing that coding skills are now required of many professionals (e.g., engineers, scientists, and artists). As the global economy grows more dependent on digital technologies, this shift underscores the importance of programming skills not only for personal career growth but also for enhancing national competitiveness and fostering economic development. Countries that invest in expanding their digital workforce and integrating programming education at all levels are positioning themselves to lead in the next wave of technological advancements (OECD, 2019). Recognizing this growing importance, various countries are launching national initiatives aimed at enhancing their programming education infrastructure, from grassroots coding programs for children to advanced professional retraining for adults (Asian Development Bank, 2022). Consequently, computer programming has emerged as a critical area of interest in educational research, with pressing questions about how best to teach and integrate programming skills into curricula at all levels of education.

In the scientific literature, the teaching and learning of programming has been the subject of many investigations (Scherer et al., 2020). These studies often focus on identifying effective instructional approaches and assessing how well they address learning difficulties. Unfortunately, programming education continues to be a significant challenge, as demonstrated by high dropout and failure rates (Garcia, 2023, 2024). This situation has spurred ongoing research efforts to find more effective instructional tools and educational approaches. Recently, the advent of ChatGPT has prompted investigations into its possible application in programming education (Deriba et al., 2024; Husain, 2024). These explorations are motivated by ChatGPT's potential as a virtual tutor and learning companion that can provide personalized guidance and support to students. Although other large language models (LLMs) for programming exist (e.g., GitHub Copilot, Amazon CodeWhisperer, Meta Code Llama, and OpenAI Codex), ChatGPT was selected because many studies have already provided initial insights into the benefits and challenges of using it as an educational tool in programming education (Bringula, 2024). However, there remains a significant research gap in synthesizing the extent of its integration, utilization, and effectiveness within this domain. Therefore, this study aims to fill this gap by conducting a rapid literature review that answers the following research questions (RQs):

  1. What are the trending research topics related to ChatGPT in programming?
  2. How does ChatGPT assist in teaching and learning programming?
  3. What issues arise from using ChatGPT in programming instruction?

Methodology

Research Design

ChatGPT is an emerging technology in the education sector (Baig & Yadegaridehkordi, 2024). Despite the availability of other generative artificial intelligence (AI) tools specifically designed for coding, such as GitHub Copilot, Replit Ghostwriter, and Amazon CodeWhisperer, ChatGPT stands out as the most widely adopted and versatile tool for both general and educational purposes. With increasing interest from the academic community, there is a pressing need to understand its applications and implications. Despite this growing curiosity, the literature on its educational usage is still developing. This limitation means that conducting a comprehensive systematic review could be a lengthy process with limited existing studies to analyze (Khangura et al., 2012). Given these considerations, the methodology of this chapter pivots towards a rapid review approach. A rapid review is a streamlined method for synthesizing research in a time-efficient manner (Tricco et al., 2015), ideally suited for fields where the body of literature is either emerging or expanding rapidly. This approach is particularly advantageous when timely insights are necessary to inform practice or policy, or when resources are constrained. Opting for a rapid review in this context allows us to gather, assess, and integrate the available research on the role of ChatGPT in programming education. Additionally, this chapter incorporated a bibliometric analysis to examine the current state of the literature. This approach was integrated to provide a quantitative overview of the research trends, publication patterns, and thematic focuses within this field (Garcia, 2023). This combined methodology is deemed the most appropriate given the current scope of the literature and the need for a swift yet thorough understanding of ChatGPT's applications and potential challenges in programming instruction.

Search Strategy

This rapid review adhered to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines to ensure a structured and transparent approach to selecting relevant articles. The initial literature search was conducted on 9 February 2024, using the Scopus database as the primary source for academic publications. To broaden the scope, the Web of Science (WoS) database was included in a second search on 16 September 2024. A third search was conducted on 9 January 2025, incorporating IEEE Xplore to capture recent studies not yet indexed in Scopus or WoS. These additional searches were undertaken proactively, following the receipt of reviewers' feedback, to ensure the review remained updated and comprehensive. The search strategy utilized the following query: (programming OR coding OR "computer programming") AND (education OR teaching OR learning OR instruction OR pedagogy OR curriculum OR e-learning OR "online learning" OR "distance learning") AND ChatGPT. This query targeted publications that included these terms in their titles, abstracts, or keywords and was designed to capture a comprehensive set of studies on the use of ChatGPT within programming education across various educational settings and modalities. The search did not limit the publication period, given that the literature on ChatGPT is still sparse (Stone, 2023). The selection process focused exclusively on journal articles, conference papers, and book chapters written in English. Studies were excluded if they lacked empirical data, such as conceptual papers, opinion pieces, or editorials. Papers that discussed AI broadly without a specific focus on ChatGPT were also excluded to maintain relevance. Other types of documents, duplicate records, and studies not directly related to the topic were omitted to refine the analysis. To avoid bias and ensure a balanced exploration of both the benefits (RQ2) and issues (RQ3) associated with the use of ChatGPT in programming education, all publications that met the selection criteria, whether positive or critical, were included. Additionally, two external evaluators (both experienced programming professors) were recruited to independently review and verify the selected studies, further ensuring the neutrality and rigor of the review process. Details of the databases, search dates, and rationales are summarized in Table 1.
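
For transparency, the block below shows one plausible rendering of this query in Scopus's advanced-search field syntax; the exact field codes and formatting used in the original searches are an assumption, and the equivalent syntax for WoS and IEEE Xplore would differ.

```text
TITLE-ABS-KEY(
  ( programming OR coding OR "computer programming" )
  AND ( education OR teaching OR learning OR instruction OR pedagogy
        OR curriculum OR e-learning OR "online learning" OR "distance learning" )
  AND chatgpt
)
```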

Table 1. Databases, search dates, and rationale

Database: Scopus. Search date(s): 9 February 2024; 16 September 2024; 9 January 2025. Rationale: Primary source for academic publications due to its comprehensive coverage of peer-reviewed literature.

Database: Web of Science. Search date(s): 16 September 2024; 9 January 2025. Rationale: Added to broaden the scope and include multidisciplinary sources not indexed in Scopus.

Database: IEEE Xplore. Search date(s): 9 January 2025. Rationale: Included to capture engineering and technology-focused studies not yet indexed in other databases.

Data Analysis

All relevant publications, along with their metadata, were saved and downloaded in .csv format. First, the file was imported into Posit Cloud (formerly RStudio Cloud) to conduct a bibliometric analysis, a method used to assess research trends and patterns by examining publication data. The trending research topics related to ChatGPT in programming were identified using the bibliometrix package (RQ1). This open-source package is designed to conduct quantitative analysis in the fields of bibliometrics and scientometrics (Aria & Cuccurullo, 2017). The bibliometric analysis was guided by the technique toolbox proposed by Donthu et al. (2021). Subsequently, the extracted data were processed through three principal coding phases (Creswell, 2012). This analytical process was undertaken to explore how ChatGPT is utilized in the context of teaching and learning (RQ2). The first phase, known as open coding, involves the initial examination of the data, where codes are developed to describe and categorize the information, facilitating an exploration free from the influence of pre-existing notions. Following this phase, axial coding is applied to examine the connections and relationships among the codes identified during the open coding phase. This step categorizes the codes into broader themes or categories, delineating the relationships among them. The final phase, selective coding, further refines these categories or themes, aiming to construct a cohesive narrative. This phase focuses on identifying the core category or the primary phenomenon that the data reveal. Within this analytical framework, issues and challenges associated with the use of ChatGPT in programming instruction were also expected to surface (RQ3). These concepts would be discerned as distinct themes during the coding process, providing insights into potential obstacles and considerations for integrating ChatGPT into programming curricula.

Results

The search across the databases focusing on the application of ChatGPT in programming education yielded 836 records. This number is understandably smaller than those of broader literature reviews; for comparison, a systematic review of ChatGPT in nursing education contexts uncovered 1,091 studies (Kucukkaya et al., 2024), a difference that highlights the more specialized focus on programming education. In adherence to PRISMA guidelines (see Figure 1), the present study selected 107 relevant documents. Within this refined dataset, most documents were published as conference papers (n = 73; 68.22%), with journal articles comprising a smaller portion (n = 34; 31.78%). These documents were disseminated across 81 distinct sources and authored by 394 contributors, with an average collaboration rate of 4.62 authors per document. Collectively, these papers have accumulated 724 citations, yielding an average citation rate of 6.77 per document. Given that ChatGPT was introduced in 2022, it is unsurprising that the surge in publications occurred primarily in 2023 and beyond.

RQ1: What are the trending research topics related to ChatGPT in programming?

When analyzing bibliographic data, Garcia (2023) asserted that the identification of themes and topics is the most critical insight of conceptual analysis through co-word occurrence. In a linguistic sense, co-word occurrence refers to analyzing how often specific terms appear together in texts, which helps identify relationships between different research topics and their relative importance within a field. This method is particularly insightful as it illuminates which aspects of research on ChatGPT in programming are immature, established, declining, or emerging. Consequently, future research can pinpoint which areas of study need further exploration. Depicted via a composite thematic map (CTM), Figure 2 highlights the conceptual framework of the dataset. A CTM is a visual tool used in bibliometric analysis to categorize research topics based on how central and well-developed they are within the field. It organizes research themes into four quadrants based on their centrality (i.e., degree of interaction) and density (i.e., internal strength). Keywords with the highest frequency of occurrence are represented as bubbles, placed according to the centrality and density metrics of their corresponding themes. In the top right quadrant, 'analysis' and 'chatgpt-generated' are identified as Motor Themes. These themes signify well-established and influential areas within the research field. The top left quadrant highlights Niche Themes, such as 'development,' 'artificial,' and 'effect,' alongside specific terms related to 'python' and 'software.' These themes are detailed and mature in their development yet remain less connected to the central discourse of the field. Basic Themes such as 'students,' 'teaching,' and 'education' are found in the bottom right quadrant. These themes represent fundamental and evolving areas that are starting to play a more central role in research. Lastly, the bottom left quadrant reveals Emerging or Declining Themes such as 'data,' 'ai-assisted,' and 'investigating.' These themes indicate research areas that are either in the nascent stages of development or are diminishing in focus within the broader research context.

In conjunction with the CTM, the present study likewise constructed a conceptual structure map (CSM) employing Multiple Correspondence Analysis (MCA). The CSM serves as another visualization tool that elucidates the interconnections among various concepts within the research. In contrast to the CTM, which categorizes themes based on their internal density and external centrality, the CSM identifies broader clusters within the research focus areas. The development of a CSM often involves the application of MCA, as suggested by prior studies (e.g., Garcia, 2023). MCA is a technique for dissecting multivariate categorical data to unearth underlying patterns and relationships. This method, when used in tandem with k-means clustering, facilitates the formation of concept clusters. As depicted in Figure 3, the resultant CSM for this dataset automatically categorized the intellectual landscape and research focal points of ChatGPT within programming education into three primary clusters. The largest cluster, highlighted in blue, encompasses a range of terms that primarily relate to the educational domain. Keywords such as 'students,' 'courses,' 'education,' and 'teaching' signal a strong research emphasis on pedagogical aspects (e.g., Zheng, 2023). This cluster signifies the broad impact and considerations of ChatGPT in educational settings. The second cluster appears to concentrate on evaluative aspects of research, involving terms like 'evaluation,' 'performance,' 'models,' and 'tasks.' This cluster suggests a focus on assessing the effectiveness, outcomes, and applications of ChatGPT in programming education, examining both the process and the tools involved (e.g., Piccolo et al., 2023). The third cluster, marked in green, is associated with keywords such as 'software,' 'engineering,' and 'development.' This indicates a cluster centered around the developmental aspects of ChatGPT, possibly exploring the technological underpinnings, software engineering perspectives, and the progression of ChatGPT's capabilities in educational programming environments (e.g., Kuramitsu et al., 2023).

RQ2: How does ChatGPT assist in teaching and learning programming?

Personalized Tutoring

Numerous research efforts have pinpointed various instructional strategies where ChatGPT serves to bolster computer programming education. A notably prevalent strategy is its use in personalized tutoring (Wieser et al., 2023), which is in line with the concept of a “More Knowledgeable Other” (MKO). Within Vygotsky's notion of the Zone of Proximal Development (ZPD), an MKO refers to someone possessing more knowledge or skill in a given domain. Importantly, this role is not limited to humans (Jarrett, 2022), allowing ChatGPT to serve as an MKO. As a virtual tutor embodying this concept, ChatGPT offers individualized feedback and detailed explanations (Randall et al., 2024). It can scrutinize student-written code, identify mistakes, and provide context-specific guidance to enhance students' coding proficiency (Chen et al., 2023; Kosar et al., 2024). This personalized interaction not only facilitates immediate error correction but also encourages students to refine their coding techniques (Leinonen et al., 2023). Moreover, it helps in understanding best practices and developing a more robust grasp of programming concepts. This approach fosters a tailored, supportive learning experience that leverages the principles of scaffolded learning and self-directed programming (Sun et al., 2024). The assertion of Garcia (2023) on the critical importance of instructional guidance, which is notably absent in many computer science courses, underscores the value of integrating such AI-driven personalized tutoring. By assuming the role of an additional MKO, ChatGPT can provide individualized support to students, a task that would otherwise be logistically challenging for teachers to offer in classrooms with high student-to-teacher ratios. In a more innovative example, Yang et al. (2024) utilized PyTutor, a ChatGPT-based intelligent tutoring system, and observed that students with lower initial knowledge exhibited significantly higher engagement, improved completion rates, and greater success rates in both in-class and after-class programming exercises. This approach underscores the potential of integrating ChatGPT with adaptive technologies to address diverse learner needs and enhance overall learning outcomes.

Knowledge Reinforcement

While some research suggests that AI code generators may not scale efficiently as teaching and learning tools (Popovici, 2023), other studies have noted their effectiveness in the programming instruction process (Husain, 2024; Kazemitabaar et al., 2023). For instance, a common learning difficulty in programming is understanding abstract concepts, which can be challenging for novice programmers and students to grasp through theoretical explanations alone (Garcia, 2024; Tsai, 2019). Although teachers address these topics in the classroom, students can engage with ChatGPT outside class hours. This practice reinforces what they have learned and provides an additional opportunity to clarify any persisting doubts or to solidify their understanding of complex topics. Supporting this approach, Yilmaz and Yilmaz (2023) and Sun et al. (2024) noted that ChatGPT can provide programming examples with clear explanations, as well as resources on advanced topics, that can significantly contribute to the learning reinforcement process. It can also be used to aid memory recall, as active retrieval practice helps strengthen memory retention (Bai et al., 2023). By reinforcing learning outside of the classroom, teachers can ensure that students have a more robust understanding of the material, which can lead to more productive classroom sessions. When students come to class with a clearer grasp of programming concepts, teachers can spend less time re-explaining foundational topics and more time on advanced material and interactive learning experiences.

Instructional Materials

Another advantage of using ChatGPT in teaching and learning programming is its capability to generate customized instructional materials (Bringula, 2024; Hartley et al., 2024; Husain, 2024). Teachers can effortlessly generate a wide array of lesson plans, coding exercises, programming quizzes, and example projects tailored to their curriculum's specific needs and difficulty levels (Zheng, 2023). It also allows them to diversify their teaching materials and provide students with a broad spectrum of coding challenges that cater to various skill levels and learning styles. Further supporting this perspective, Speth et al. (2023) conducted a study on the deployment of ChatGPT-generated exercises within beginner and intermediate Java programming courses. Their findings indicate that the exercises produced by ChatGPT are not only suitable for academic settings but also indistinguishable from those created by humans. This finding aligns with research conducted by Spasić and Janković (2023), who assessed the capability of ChatGPT in devising programming lesson plans for preschoolers. They observed that its ability to mimic human-like responses is remarkably extensive. In each of the three prompt scenarios they examined, the responses were not only relevant to the assigned task and theme but also resulted in lesson plans that adhered closely to established literature guidelines. The structure of these lesson plans likewise conformed to recommended best practices, encompassing clear objectives, necessary materials, and detailed lesson outlines. Gutierrez et al. (2024) reported similar findings after testing ChatGPT's ability to generate programming exercises. Their results consistently featured high-quality machine problem descriptions, accurate code solutions, and well-structured code. This finding suggests that students can be exposed to a wider range of problems, providing them with additional practice to improve their programming knowledge and skills.

Source Code Generation

The capability of ChatGPT to generate code is regarded as a transformative tool for programmers, educators, and students. Recently, the release of the canvas feature has made coding in ChatGPT even more accessible and intuitive (see Figure 4). This new addition allows users to seamlessly edit and iterate on code in a collaborative environment. However, the automatic code generation feature is often met with criticism. For instance, Bringula (2024) argues that this capability may encourage overdependence on the technology and potentially undermine the development of critical thinking and problem-solving skills. There are also concerns regarding academic dishonesty (Sun et al., 2024), as students may be tempted to submit AI-generated code as their own (Rose et al., 2023). Despite these concerns, ChatGPT can provide substantial educational benefits when used with integrity. For example, Jacques (2023) posited that students can use ChatGPT to produce a variety of solutions and compare their differences in efficiency, approach, and style. In educational research, there is substantial evidence supporting the benefits of learning from and comparing across multiple examples (Fiorella, 2023). This comparison can also extend to juxtaposing AI-generated code with that crafted by humans (Garcia et al., 2023), which would offer insights into the readability and maintainability of code. Moreover, students can be encouraged to use AI to generate an initial solution and then challenge themselves to devise an alternative solution independently. As an example, after ChatGPT provides a Python script to sort a list of numbers, students could implement a different sorting algorithm manually. Another use of AI-generated code involves educators prompting students to describe what it does and why it works. Corney et al. (2014) and Vieira et al. (2017) suggested that requiring students to articulate the functionality and rationale behind the code compels them to engage in critical thinking about the code's purpose and its operational mechanisms. In the context of big data analytics education, Park and Kim (2025) added that students utilizing ChatGPT outperformed their peers who relied on Stack Overflow or worked without any external support.
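
To make this exercise concrete, the following minimal Python sketch (an illustration added here, not drawn from the cited studies; the function names are hypothetical) contrasts the kind of concise solution ChatGPT typically returns with a manual alternative a student might write for comparison.

```python
# Illustrative sketch only: contrasts a ChatGPT-style one-liner with a
# student-written alternative, as discussed above. All names are hypothetical.

def sort_numbers_ai(numbers):
    """The kind of concise solution ChatGPT typically returns."""
    return sorted(numbers)

def sort_numbers_student(numbers):
    """A manual insertion sort a student might write to compare approach and efficiency."""
    result = []
    for value in numbers:
        position = 0
        while position < len(result) and result[position] < value:
            position += 1
        result.insert(position, value)
    return result

if __name__ == "__main__":
    sample = [5, 2, 9, 1]
    assert sort_numbers_ai(sample) == sort_numbers_student(sample) == [1, 2, 5, 9]
```

Placing the two versions side by side invites discussion of readability, efficiency, and the trade-off between relying on built-ins and reasoning through the algorithm by hand.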

Immediate Feedback

Immediate feedback plays a pivotal role in the scaffolding process for effective learning, particularly in the context of programming education (Garcia, 2021; Shaka et al., 2023). The promptness with which learners receive feedback on their programming efforts is crucial for their development. Marwan et al. (2020) highlighted the significance of this immediacy, noting that quick feedback can significantly enhance the engagement of novice programmers and bolster their determination to continue their studies in computer science. Expanding on this concept, Yilmaz and Yilmaz (2023) pointed out the advantages of using ChatGPT in providing instantaneous feedback, which can accelerate the learning process and enhance student comprehension. At the very least, the ease of use of ChatGPT can encourage engagement from students who might typically be reluctant to seek help (Maher et al., 2023). Such an enhancement in programming education is attributed to the ability of ChatGPT to offer clear explanations of code and support in debugging (Chen et al., 2023; Randall et al., 2024). Furthermore, Sun et al. (2024) highlighted that students can directly input their code snippets or detailed error messages into ChatGPT, which significantly streamlines the acquisition of tailored feedback. Their research found that the provision of such personalized feedback has been instrumental in enhancing students' programming education. Such immediate and customized feedback additionally aids teachers by enhancing the efficiency of the feedback process (Rose et al., 2023). This optimization allows educators to ensure that students receive precise guidance tailored to their specific coding challenges without the need for extensive manual review. By integrating ChatGPT into the feedback loop, teachers can allocate more time to other critical aspects of education.
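
As a simple illustration of this workflow (hypothetical and added here for clarity; the snippet, error message, and prompt wording are invented), the following Python sketch shows how a student might bundle a failing function and its error message into a single prompt before pasting it into ChatGPT.

```python
# Hypothetical illustration of how a student might package a failing snippet and
# its error message into a single ChatGPT prompt; names and wording are invented.

buggy_snippet = '''
def average(grades):
    total = 0
    for grade in grades:
        total += grade
    return total / len(grades)

print(average([]))
'''

error_message = "ZeroDivisionError: division by zero"

prompt = (
    "My Python function crashes. Here is the code:\n"
    f"{buggy_snippet}\n"
    f"The error I get is: {error_message}\n"
    "Can you explain why this happens and how to fix it?"
)

print(prompt)  # The assembled text is what the student would paste into ChatGPT.
```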

Assessment Assistance

Evaluating student code is a complex and time-consuming task. Teachers often face significant challenges in this area, as they must not only determine the correctness of the code but also assess its efficiency, readability, and adherence to best practices (Garcia et al., 2022). ChatGPT transcends its role in generating assessments by offering substantial aid in the evaluation process itself. Corroborating this idea, Wieser et al. (2023) investigated the effectiveness of ChatGPT's method for assessments. Their findings indicated that the distribution of grades by ChatGPT was uniform and logical, accompanied by further explanations that shed light on the foundational aspects of its grading criteria. Moreover, its capability to scrutinize code enables a detailed analysis that encompasses not just the detection of errors but also assessments of the code's optimization, clarity, and alignment with professional coding standards (Chen et al., 2023; Yan et al., 2023). Jukiewicz (2024) added that, unlike teachers who require several hours to grade assessments, ChatGPT can complete the grading process much faster, in approximately 9.5 seconds per query. There is also a positive aspect for students in using ChatGPT to assess their own work. Li et al. (2023) observed that the code produced by ChatGPT tends to align more closely with established coding conventions than that written by humans. For example, when coding in the C++ programming language, ChatGPT typically opts for the standard "endl" stream manipulator instead of the "\n" newline character. It also prioritizes readability by using descriptive words for identifiers, whereas humans might opt for abbreviations or single letters whose meaning can be unclear. When students receive such constructive feedback from ChatGPT and follow these exemplars of clarity and convention in their coding activities, it can lead to submissions that are easier for teachers to assess (Husain, 2024).

RQ3: What issues arise from using ChatGPT in programming instruction?

Academic Dishonesty

Despite the numerous potential benefits of integrating ChatGPT into programming instruction, various studies have identified challenges and issues that can emerge. Academic dishonesty, particularly in the form of code plagiarism, stands out as a prevalent issue that could undermine the integrity of programming education (Akçapınar & Sidan, 2024; Lau & Guo, 2023; Rose et al., 2023). As noted by Chen et al. (2024), the use of generative AI tools in coding has contributed to a noticeable rise in plagiarism cases. More specifically, there has been a shift from ‘suspected online plagiarism hubs’ like Chegg or CourseHero to ChatGPT. This concern stems from the ease with which students can generate source code using ChatGPT, potentially leading to instances where the work submitted is not genuinely their own. Such a situation complicates the assessment of students' actual understanding and capability in coding, as it blurs the line between learning assistance and outright plagiarism (Budhiraja et al., 2024; Gasiba et al., 2023). Even seasoned instructors may struggle to detect AI-generated code, making it harder to assess students' true abilities (Ellis et al., 2024). The availability of instant solutions can also tempt students to bypass the critical thinking and problem-solving processes essential for their development as competent programmers. As students become accustomed to code generation, Popovici (2023) argued, it is imperative to develop new programming competencies. For instance, teachers may educate students on how to review AI-generated code for errors, optimize it for efficiency, and ensure it meets project requirements and coding standards. Such competencies ensure that students can effectively use AI as a tool rather than a substitute. Savelka et al. (2023) also asserted the importance of cultivating an educational environment where the emphasis is placed on learning and personal development rather than solely on achieving the correct answers. Emphasizing the importance of academic integrity and ethical conduct in the classroom is especially crucial in an era of LLMs and generative AI applications (Bozkurt et al., 2024).

Ethical Issues

Ethical concerns represent a consistent theme across various research studies (Liu, 2023; Petrovska et al., 2024; York, 2023). This focus on ethics underscores the importance of addressing moral questions that arise in the context of using technologies like ChatGPT in education. These issues encompass a range of considerations, from the origins and reliability of the generated code to the imperative of nurturing learner autonomy to avoid harmful reliance. Silva et al. (2024) emphasized the significance of promoting a profound and self-sufficient grasp of programming principles, reflecting the broader ethical challenge of ensuring students develop their skills without undue dependence on AI tools. According to Feng et al. (2023), there have been specific instances where ChatGPT was tasked with creating a Python function to predict an individual's seniority or evaluate their scientific competence. Unfortunately, these instances revealed that the code produced by ChatGPT could manifest biases related to demographic factors, raising serious ethical questions about the objectivity and fairness of AI-generated content. This issue aligns with broader concerns about AI models inheriting biases present in their training data, which can unintentionally perpetuate stereotypes or discriminatory outcomes. Such biases challenge the ethical deployment of AI in sensitive educational contexts, where fairness and equity are paramount. Additionally, concerns surrounding data privacy and the potential for legal repercussions further complicate the ethical landscape. Valový and Buchalcevova (2023) pointed out that the integration of ChatGPT into educational practices introduces risks related to the handling and protection of sensitive information, underscoring the necessity for robust data governance measures. All these studies posited that addressing these challenges requires a concerted effort to implement ethical guidelines and practices that respect the integrity of the learning process while harnessing the potential of AI tools like ChatGPT.

Overreliance on ChatGPT

There is empirical evidence that ChatGPT is a proficient coder (Chen et al., 2023; Li et al., 2023; Ouh et al., 2023; Piccolo et al., 2023). This code generation capability, however, risks becoming a double-edged sword as students might increasingly turn to ChatGPT for swift solutions (Budhiraja et al., 2024; Jošt et al., 2024). Rather than engaging with programming problems independently or leveraging critical thinking skills acquired in the classroom, there is a tendency among students to depend on ChatGPT for immediate answers (Joshi et al., 2024). Teachers have expressed concerns that students may gain fewer programming skills and experience a decline in both learning and code quality due to their overreliance on ChatGPT (Groothuijsen et al., 2024). This pattern of dependency could escalate to a stage where students consult ChatGPT for solutions to tasks they are well-equipped to solve by themselves. As pointed out by Silva et al. (2024), the issue of becoming overly dependent on generative AI tools for code generation is concerning. The convenience provided by ChatGPT might cause students to neglect nurturing essential programming skills. Kazemitabaar et al. (2023) observed that excessive reliance on such tools could hinder the learning process. As a potential countermeasure, they suggested implementing measures that require learners to interact with the AI-generated code in educational ways before using it directly. For example, incorporating a preliminary task, such as tackling a Parsons problem based on the AI-generated code or answering multiple-choice questions about the concepts it contains (e.g., nested loops), could promote an active learning environment. These initial activities aim to immerse students in the subject matter prior to their use of AI-generated solutions, presenting creative methods to boost active learning and improve students’ understanding in a customized fashion.
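
To illustrate what such a preliminary task might look like, the following Python sketch (a hypothetical example added here; the snippet, function names, and checking logic are not drawn from the cited studies) scrambles the lines of a short AI-generated nested-loop program so that students must reconstruct the correct order before they may use the solution.

```python
# Hypothetical sketch of a Parsons-style preliminary task built around an
# AI-generated nested-loop snippet; the snippet and checking logic are invented.

import random

CORRECT_ORDER = [
    "for row in range(3):",
    "    for col in range(3):",
    "        print(row, col)",
]

def make_parsons_problem(lines):
    """Return the snippet's lines in a shuffled order for students to rearrange."""
    shuffled = lines[:]
    while shuffled == lines:  # Reshuffle until the order actually changes.
        random.shuffle(shuffled)
    return shuffled

def check_solution(student_order):
    """Return True only if the student restores the original line order."""
    return student_order == CORRECT_ORDER

if __name__ == "__main__":
    scrambled = make_parsons_problem(CORRECT_ORDER)
    print("Rearrange these lines into a working nested loop:")
    for line in scrambled:
        print(repr(line))
    print("Original order accepted:", check_solution(CORRECT_ORDER))
```

A full Parsons tool would also strip indentation cues and add distractor lines; this sketch keeps only the core reorder-and-check idea.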

Impaired Critical Thinking

A detrimental consequence of excessive dependence on ChatGPT is the potential degradation of critical thinking skills (Bai et al., 2023). Critical thinking is vital in programming, especially as it equips students with the ability to methodically approach problem-solving (Garcia, 2023). Because ChatGPT provides instantaneous solutions, there is a risk that students will rely on these answers without fully understanding the foundational principles they are based on. Such reliance can lead to a superficial understanding of key programming concepts, as genuine critical thinking involves not only identifying solutions but also grasping the reasons behind them. Furthermore, becoming a proficient programmer is inherently linked to the experience of making mistakes and mastering the art of debugging. Relying on ChatGPT to consistently provide corrections or suggest error-free code may hinder the development of essential debugging skills in students, which demand an in-depth comprehension of the code, logical reasoning, and attention to detail. Husain (2024) pointed out that instructors need to put in extra effort to design programming assignments that necessitate the application of programming knowledge and critical thinking rather than setting straightforward or trivial tasks that could be easily completed through direct queries. Sharing the same perspective, Ellis et al. (2024) recommended making assessment prompts more general. This approach aims to challenge students who lack a programming understanding, preventing them from achieving a passing score by simply inputting the assignment instructions into ChatGPT. The goal is to encourage students to engage critically with their programming tasks. Additionally, Yilmaz and Yilmaz (2023) recommended assigning complex and modular programming challenges to encourage students to synthesize complete solutions from modular responses. They argue that tackling complex and unstructured problems plays a pivotal role in cultivating students' computational thinking abilities (Cheng et al., 2023).

Technical Limitations

Drawing parallels with findings from various disciplines (Dave et al., 2023) and reviews (Cong-Lem et al., 2024), it is evident that ChatGPT shares similar technical limitations when applied to programming instruction. For instance, its text-based nature and lack of specialized programming features, unlike those found in Integrated Development Environments (IDEs), present significant challenges. The absence of these coding features means ChatGPT cannot offer the same level of support for the iterative and experimental processes that are fundamental to programming. Researchers such as Ouh et al. (2023), Yilmaz and Yilmaz (2023), Shaka et al. (2023), and Budhiraja et al. (2024) have further highlighted the issue of accuracy in the code generated by ChatGPT. They pointed out that while ChatGPT can produce code quickly, it sometimes lacks the precision and reliability required for programming tasks. This inconsistency can mislead learners, especially novices, into adopting incorrect programming practices or misunderstanding fundamental concepts. Expanding on this limitation, Piccolo et al. (2023) emphasized ChatGPT's shortcoming as a non-IDE tool, specifically its inability to execute code. Therefore, ChatGPT often cannot predict code outcomes, underlining the indispensable role of human feedback in the learning process. This gap highlights a critical area where direct interaction with code and immediate output observation, features intrinsic to IDEs, are vital for comprehensive programming education. Lastly, DePalma et al. (2024) highlighted that ChatGPT faces challenges in understanding the broader context in which individual code segments are applied. This limited contextual awareness often leads ChatGPT to offer suggestions based on misinterpretations or incorrect assumptions about the overall structure or functionality of the code. Consequently, it may sometimes propose solutions that are irrelevant or unnecessary.

Discussion and Implications

Propelled by advancements in AI technologies, technology-enhanced programming education has witnessed remarkable growth in recent years. The introduction of ChatGPT has opened new avenues for exploration, with numerous studies beginning to investigate its potential implications and applications (Bringula, 2024; Silva et al., 2024). Despite the growing interest, there is a notable research gap in understanding the scope of ChatGPT's integration, utilization, and impact within programming education. This rapid review seeks to bridge this gap by examining the current literature to identify trending research topics surrounding the use of ChatGPT in computer programming, evaluate how ChatGPT supports teaching and learning in programming, and discern the challenges that emerge from its application in programming instruction. As highlighted in the reviewed studies, teachers and students are already leveraging ChatGPT for their programming activities, making it impractical to prohibit its use entirely. Despite calls to limit or outright ban access to it (Lau & Guo, 2023), the trend unmistakably suggests that ChatGPT will remain a fixture in the educational landscape.

To illustrate the current literature surrounding the use of ChatGPT in programming, trending research topics were identified, highlighting key areas of focus and development. Based on the results, it is noticeable that the identified topics and themes reflect a broader historical evolution of AI integration into education. First, the Motor Themes emerging from the literature indicate the growing significance of AI-generated content and automated analysis tools (Chen et al., 2023; Randall et al., 2024), which have become central to modern educational practices. Historically, AI began as a niche area with limited application in education, primarily focused on adaptive learning systems and intelligent tutoring. However, as AI technologies matured, particularly with the advent of LLMs like ChatGPT, their role expanded, influencing curriculum development (Speth et al., 2023), student assessments (Wieser et al., 2023), and personalized learning experiences (Randall et al., 2024). Meanwhile, the Niche Themes reveal that while AI techniques are becoming increasingly specialized and sophisticated, their integration into broader educational discourse remains limited. The current literature suggests that certain advanced applications of AI, particularly in technical areas such as programming and software development, are highly developed but remain peripheral to the broader educational landscape. For example, while ChatGPT is being used by some researchers and teachers in experimental or supplemental capacities, as well as by students outside formal classroom settings, it has yet to be fully embedded into general education settings (e.g., introductory computer science courses; Mahon et al., 2024). This indicates that although the technology is being explored, its systematic integration into formal education frameworks remains limited.

In terms of teaching and learning, the implications of integrating ChatGPT into programming education are multifaceted. From a curriculum standpoint, the integration of ChatGPT into programming education necessitates a thoughtful revision of existing curricula (Randall et al., 2024). This need for curricular adaptation stems from the evolving nature of programming and computer science, where AI and machine learning are becoming increasingly central. Integrating ChatGPT and other emerging generative AI tools and LLMs into the curriculum can provide students with a more contemporary learning experience that mirrors the real-world applications and challenges they are likely to encounter in their careers. Moreover, as the job market for future programmers and developers is expected to demand AI competencies and prompt engineering skills, curriculum designers are urged to weave AI literacy, ethics, and a comprehensive understanding of AI's societal, legal, and security implications into programming courses. However, it is important to note that while these tools may enhance the learning experience, they do not necessarily guarantee improved academic performance, as studies have shown differing results regarding their effectiveness in fostering deeper understanding and skill mastery. For example, Yilmaz and Yilmaz (2023) reported that using ChatGPT in programming instruction can improve students' coding skills, whereas Sun et al. (2024) found no statistically significant difference between students utilizing ChatGPT and those engaged in self-directed, facilitated programming. Empirical evidence also shows that ChatGPT performs well on easy and medium coding problems but struggles with more complex tasks (Bucaioni et al., 2024). Nevertheless, this broadened educational approach is vital in equipping students with the diverse skill set needed to navigate a rapidly changing landscape, marked by the emergence of new AI tools and their deep integration into various business processes.

For teachers, the practical implications involve designing educational strategies that harness the potential of ChatGPT to enhance standard teaching practices without compromising the foundational principles of self-guided learning and critical reasoning. While ChatGPT offers advantages such as tailored instruction and automated code creation, it also poses the risk of fostering dependency and diminishing critical thinking skills. Moreover, some evidence suggests that students prefer teacher-created explanations over AI-generated ones for fundamental programming concepts, such as sequence, selection, and iteration (Lee & Song, 2024). Consequently, teachers are advised to judiciously integrate AI-based educational tools and resources into their teaching frameworks. For example, teachers can improve students’ comprehension of course material by presenting programming demonstrations that feature ChatGPT. It is also crucial for teachers to vigilantly check for signs of academic misconduct. Teachers may use software such as the Measure of Software Similarity (MOSS) and Codequiry to aid in identifying potential code plagiarism (Qureshi, 2023). In the absence of robust plagiarism tools, alternative methods can be implemented. Popovici (2023) suggested requiring students to modify individual lines of code under teacher supervision to earn points. In this approach, teachers may deliberately introduce a compilation error into one line of code and challenge students to rectify it. Another effective strategy involves requiring students to annotate their code with comments (Ellis et al., 2024). Mandating specific commentary throughout their programming assignments can assist teachers in discerning students’ understanding and originality, thereby enhancing the educational value of assignments and promoting academic integrity.
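
To illustrate the error-seeding strategy, the following Python sketch (a hypothetical example added for illustration; the exercise content is not taken from Popovici, 2023) pairs a deliberately broken version of a short function with the corrected version students are expected to produce.

```python
# Hypothetical illustration of the "introduce one error, have students fix it"
# strategy described above; the exercise content is invented.

broken_version = """
def count_even(numbers):
    count = 0
    for n in numbers
        if n % 2 == 0:
            count += 1
    return count
"""  # The missing colon after the for statement is the planted error to spot.

def count_even(numbers):
    """Corrected version a student should arrive at after fixing the error."""
    count = 0
    for n in numbers:
        if n % 2 == 0:
            count += 1
    return count

if __name__ == "__main__":
    print(count_even([1, 2, 3, 4]))  # Expected output: 2
```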

For students, the integration of ChatGPT into programming education opens the door to an expansive reservoir of knowledge and assistance. This tool can instantly clarify doubts, provide explanations for complex programming concepts, and even generate code examples. However, this advantage comes with the essential condition of upholding academic integrity and a deep-seated commitment to genuinely grasp the fundamental principles of programming. The implications of this scenario for students are multifaceted. On the one hand, ChatGPT serves as a powerful educational ally that potentially democratizes access to high-quality programming education by making it more accessible and personalized. Students who might otherwise struggle with traditional learning resources or face barriers due to geographical and economic constraints can benefit from immediate, tailored support. On the other hand, the ease of access to solutions poses a risk of diminishing the incentive for students to engage deeply with the material. The temptation to use ChatGPT as a shortcut, bypassing the effortful process of coding from scratch, debugging, and internalizing the logic behind programming tasks, could impair the development of critical thinking and problem-solving skills. These skills are not only vital for academic success but are also indispensable in the real-world programming environment, where understanding the 'why' behind the 'how' is crucial (Berrezueta-Guzman & Krusche, 2023). Furthermore, the reliance on ChatGPT for immediate answers could potentially undermine the development of perseverance and resilience—qualities that are nurtured through facing and overcoming challenges. Programming is as much about dealing with frustration and learning from failure as it is about celebrating success. Encountering and working through errors is a fundamental part of the learning process that fosters growth, adaptability, and innovation.

Overall, while ChatGPT offers substantial benefits in programming education, its limitations also necessitate a balanced approach to its use. The education sector must navigate this new terrain carefully to ensure that the technology serves as a complement to traditional learning methods rather than a replacement (Bozkurt et al., 2024). Emphasizing the importance of academic integrity, fostering a culture of honest inquiry, and encouraging students to explore beyond surface-level understanding are crucial steps in leveraging ChatGPT's potential while safeguarding the integrity and depth of programming education. Given that this study constitutes a rapid review, it is imperative that future research undertake a more comprehensive systematic review and meta-analysis, utilizing a more developed corpus of literature. As the academic discourse around ChatGPT and its implications for programming education continues to evolve, a systematic review will offer a more detailed understanding of its impact. Such research efforts should ideally be pursued once the literature on ChatGPT in programming education becomes more mature. This future research will be critical in guiding educators, policymakers, and curriculum developers in making informed decisions about the integration of AI tools like ChatGPT into educational frameworks. Nevertheless, it remains uncertain whether scholarly interest in using ChatGPT for programming instruction will maintain an upward trajectory in subsequent years. This uncertainty only strengthens the necessity for future research to re-examine the literature to ascertain the trajectory of publication trends. The academic community must maintain a continuous progression in research to enhance our comprehension of this tool and to effectively tackle emerging challenges. By consistently updating the educational field with fresh empirical evidence, the benefits of ChatGPT can be more effectively harnessed.

Another research avenue identified in this review pertains to the need for more specific, targeted studies that address the current gaps in ChatGPT's use within programming education. Several studies have highlighted vital research areas that require further exploration, such as the impact on teaching effectiveness (Bringula, 2024), the influence on students’ engagement and class collaboration (Husain, 2024), methods for enhancing prompt utilization (Sun et al., 2024), and the development of ethical guidelines for its use (Petrovska et al., 2024). To advance the field more directly, future research could focus on experimental or observational studies designed to answer specific questions raised by these themes. For example, an experimental study could examine how different instructional designs that integrate ChatGPT affect teaching effectiveness, comparing various pedagogical approaches and assessing their impact on learning outcomes. Additionally, observational studies could explore how ChatGPT influences student engagement and collaboration within group-based programming tasks, examining both in-class and out-of-class interactions. Future research could also involve design-based research aimed at refining prompt engineering techniques, where iterative cycles of testing and refinement are used to optimize how educators and students interact with ChatGPT to enhance learning experiences. There is also a growing need for qualitative research to explore the ethical implications of ChatGPT’s deployment in education. This could involve interviews or focus groups with educators, students, and policymakers to develop ethical frameworks and guidelines that address concerns such as academic integrity, data privacy, and bias in AI-generated content. By addressing these specific research questions and employing diverse methodologies, future studies can provide deeper insights into ChatGPT's role in programming education and contribute to the development of effective, ethical, and impactful AI-enhanced teaching and learning practices.

Conclusion

This rapid review of the utilization of ChatGPT in programming education reveals a research landscape teeming with potential yet fraught with issues. By scrutinizing the current literature, this study has illuminated the promising role of ChatGPT in enhancing programming instruction. Its applications include providing personalized tutoring, reinforcing knowledge, creating instructional materials, generating source code, providing immediate feedback, and aiding in assessment. These uses underscore ChatGPT's capacity to revolutionize programming education by enhancing both teaching and learning experiences. However, the adoption of this technology is not without concerns, including academic dishonesty, ethical dilemmas, diminished critical thinking, overreliance on the technology, and technical limitations. The insights derived from this review necessitate a measured and thoughtful integration of ChatGPT within programming curricula. Furthermore, the recommendations and implications laid out herein serve as a roadmap for stakeholders across the educational spectrum. As we continue to explore the integration of ChatGPT into programming education, it is imperative that further evaluation, adaptation, and dialogue among all educational stakeholders take place. This collaborative effort will pave the way for a balanced and forward-thinking approach to incorporating AI into our educational frameworks.