Correlation of Error Metrics in Python CS1 Courses
Abstract
Timely and effective feedback is essential for novice programmers. Although recent advances in Large Language Models make it possible to generate understandable feedback in CS1 courses, determining the appropriate timing for delivering that feedback automatically remains a persistent challenge. Compiler messages serve as a fundamental communication channel between programmers and computers, signaling syntactic and runtime errors. Various metrics associated with these messages could potentially signal the need for intervention, so it is important to explore the correlations among these error metrics. This study conducts a comprehensive analysis of multiple public Python datasets to evaluate error metrics and explore their correlations. The findings offer insights into a potential indicator for determining the appropriate timing of feedback in programming education.
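The kind of analysis the abstract describes can be sketched as a pairwise rank correlation over per-student error metrics. The metric names and values below are hypothetical illustrations, not taken from the paper's datasets; Spearman correlation is assumed here because error metrics are typically skewed counts where rank-based correlation is more robust than Pearson.

```python
import pandas as pd

# Hypothetical per-student error metrics (illustrative values only,
# not drawn from the study's datasets)
metrics = pd.DataFrame({
    "error_count":         [3, 7, 1, 9, 4, 6],          # total error messages seen
    "repeated_error_runs": [1, 4, 0, 5, 2, 3],          # runs of identical consecutive errors
    "time_to_fix_s":       [40, 90, 15, 260, 180, 150], # seconds until the error disappears
})

# Pairwise Spearman rank correlation between the metrics
corr = metrics.corr(method="spearman")
print(corr.round(2))
```

A strongly correlated pair of metrics would be redundant as intervention triggers, while a weakly correlated one may capture an independent signal, which is the kind of distinction such a correlation matrix surfaces.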