Thomas Price

Associate Professor

411 Venture IV

919-515-1286 Website

Bio

Thomas W. Price is an Associate Professor in the Department of Computer Science at NC State University. He directs the Help through INTelligent Support (HINTS) Lab (go.ncsu.edu/hintslab), which develops learning environments that use AI and data-driven features to support students as they learn.

Price’s research focuses on computing education, with an emphasis on automatically generating programming hints and feedback using student data. He has evaluated the effectiveness of innovative programming environments, including block-based and frame-based systems, and designs intelligent support tools that integrate with these technologies. His work examines how students seek and use help in both classroom and online settings.

He earned his M.S. and Ph.D. in computer science from NC State in 2015 and 2018, respectively, and was named the College of Engineering Doctoral Scholar of the Year in 2018. His research has earned Exemplary Paper Awards from the International Conference on Educational Data Mining and the ACM Technical Symposium on Computer Science Education, among others.

Office Hours
CSC 110: go.ncsu.edu/csc110oh
CSC 522: go.ncsu.edu/csc522oh

Area(s) of Expertise

Advanced Learning Technologies
Artificial Intelligence and Intelligent Agents
Data Sciences and Analytics
Graphics, Human Computer Interaction, & User Experience

Publications

View all publications

Grants

Date: 08/15/20 - 09/30/26
Amount: $1,999,578.00
Funding Agencies: National Science Foundation (NSF)

This project will develop generalizable data-driven tools that address the conceptually and practically complex task of constructing adaptive support for individualized learning in STEM domains.

Date: 08/01/22 - 7/31/26
Amount: $460,757.00
Funding Agencies: National Science Foundation (NSF)

We propose to develop infrastructure to enhance and scale CSEd research by leveraging the power of data-driven AI and ML. To do so, we need to overcome three challenges: data (there is not enough high-quality data), analytics (the development and sharing of data mining and AI methods for CSEd is highly siloed and disconnected), and evaluation (AI-based interventions and tools are not easily deployed and replicated). To address these challenges, we will develop a large collection of resources, including datasets, analytical approaches, reusable smart learning content, and tools and user services, that enables the community to reuse these resources and contribute to the collection.

Date: 07/01/23 - 6/30/26
Amount: $525,284.00
Funding Agencies: National Science Foundation (NSF)

The goal of this work is to investigate the role of self-regulated learning (SRL) in computing education by validating and analyzing fine-grained trace data from students' interactions with programming tools. We will: 1) Conduct instructor interviews and classroom observations to identify SRL strategies related to programming tool use; 2) Instrument the tools to record student behavior, adding a priori design choices that make students' SRL strategies more visible; 3) Conduct laboratory studies and collect think-aloud protocols, then code the data with the strategies identified earlier; 4) Develop educational data mining techniques to identify SRL behaviors from log data; 5) Deploy the SRL detectors in both introductory and more advanced CS classrooms, using the detected behaviors to validate and extend SRL theories in the domain of CS.

Date: 07/01/22 - 6/30/26
Amount: $299,998.00
Funding Agencies: National Science Foundation (NSF)

Software testing is a critical skill for computer science graduates entering technical positions. Software tests, and in particular unit tests, have several uses in education. The purpose of this proposal is to create pedagogy and tools around writing unit tests for CS3 and Software Engineering (SE) courses. Building on our preliminary work, we will develop and evaluate the impact of a lightweight intervention with explicit testing strategies on the quality of student-written tests. Then, we will investigate the impact of the process of writing tests on student outcomes.

Date: 08/01/19 - 7/31/25
Amount: $749,920.00
Funding Agencies: National Science Foundation (NSF)

We will develop new data-driven methods to support students automatically as they create novel, open-ended, and creative computational artifacts. Specifically, we will develop techniques to adaptively scaffold project design and planning, detect students' programming goals, offer on-demand example-based support, and tailor help to students' needs through an interactive help interface. We will augment the popular Snap programming environment, which is already used in hundreds of high school and college classrooms, with these features and evaluate their effectiveness in a series of experiments designed to explore how students approach open-ended tasks and how best to support them.

Date: 07/01/20 - 6/30/23
Amount: $174,938.00
Funding Agencies: National Science Foundation (NSF)

Existing research suggests that institutions may be able to increase the persistence of women in STEM by increasing their self-assessed STEM ability. We propose conducting both a longitudinal field experiment (in Computer Science [CS] classes) and a lab experiment (with novice programmers) to assess the impact of unambiguous, direct performance feedback on women's and men's self-assessed CS ability and CS persistence. Beyond the support for our research provided by social-psychological theory, mediation analysis of pilot data from a field experiment found the predicted causal chain: the intervention increased women's self-assessed CS ability, which then increased women's CS persistence intentions.


View all grants