
Poster Session

We received a great response to the call for high-quality poster submissions from members of the Northwestern community and beyond describing recent or ongoing research at the intersection of artificial intelligence, education, and the learning sciences. Accepted poster titles and abstracts are listed below.

Accepted Posters

The following posters will be on display all day May 8th. We will have formal poster sessions during the coffee break from 3:30 - 4 p.m. and the reception from 5:40 - 7:30 p.m.

Learning Agent-based Modeling with LLM Companions: Experiences of Novices and Experts Using ChatGPT & NetLogo Chat

John Chen, Xi Lu, Yuzhou Du, Michael Rejtig, Ruth Bagley, Mike Horn, Uri Wilensky, Northwestern University

Large Language Models (LLMs) are fundamentally changing computer programming practices, yet their impact on agent-based modeling (ABM) remains unexplored. This study investigates how LLMs can support NetLogo, a widely-used ABM programming language. The researchers designed NetLogo Chat, an LLM-based interface integrated with the NetLogo environment. To understand user perceptions and needs, they interviewed 30 participants from diverse backgrounds including academia, industry, and graduate schools globally. Their findings revealed significant differences between experts and novices. Experts reported more perceived benefits and were more willing to incorporate LLMs into their workflows. These differences stemmed from a knowledge gap: experts approached LLMs strategically, breaking tasks into smaller components and critically evaluating AI outputs, while novices requested complete solutions and struggled to debug AI-generated code. The researchers identified three key needs for LLM-based ABM interfaces: (1) guidance that adapts to users' knowledge levels, (2) personalization that accommodates diverse learning preferences, and (3) integration that supports the entire modeling process beyond code generation. Bridging the knowledge gap is crucial for creating better AI-assisted programming environments. This study contributes valuable insights for designing LLM-based programming interfaces that can effectively support both novices and experts in computational modeling.

Processes Matter: How ML/GAI Approaches Could Support Open Qualitative Coding of Online Discourse Datasets

John Chen, Alexandros Lotsos, Grace Wang, Lexie Zhao, Bruce Sherin, Uri Wilensky, Michael Horn, Northwestern University

Open coding, a key inductive step in qualitative analysis, discovers concepts from human datasets. It is widely used in educational research. Capturing extensive "coding moments" is challenging with large discourse datasets. While studies explore machine learning (ML)/Generative AI (GAI)'s potential for open coding, few evaluation studies exist. We compared open coding results from five ML/GAI approaches and four human coders using online chat messages between teachers and designers. After initial analysis, we identified that line-by-line approaches produced finer-grained codes closer to theoretical expectations in open coding. Our systematic analysis reveals complementary potential between humans and AI. Line-by-line AI approaches effectively identify content-based codes, while humans excel in interpreting conversational dynamics. We found machine coders impressively identified actions, experiences, or intentions from message contents, but were less likely to produce codes grounded in conversational dynamics. Analytical processes are essential for both human and machine coders to produce high-quality outcomes. We suggest: (1) researchers should integrate ML/GAI approaches by matching them with analytical processes, (2) better approaches may be developed by integrating human coding processes, and (3) researchers should consider using ML/GAI approaches as parallel co-coders rather than replacements for human analysis.

Towards an AI system to support student self-assessments of programming ability

Melissa Chen, Eleanor O'Rourke, Northwestern University

Introductory computer science courses struggle to retain students, in part due to students' inaccurate perceptions of the programming process. Students often negatively assess their abilities during moments of the programming session that reflect effective practices (e.g., searching for syntax), due to their high self-expectations, understanding of typical practice, and low confidence in their ability to recover from setbacks. However, they often do not ground these reasons in sources of information, like hearing from a professor. Furthermore, students can develop different perceptions of programming practice despite being in the same classes. Students who frequently self-assess negatively in these moments also tend to have lower self-efficacy and persistence in computing. This motivates the need for scalable yet individualized interventions that explicitly provide information about the programming process to support more accurate self-assessments. We propose an AI system that delivers personalized feedback in real time and within students' programming environments. When the system detects a self-assessment moment, it provides information about expected and effective programming practices and how to adopt them, informing students' expectations for the programming session and supporting their confidence in their abilities. We present the design and preliminary evaluation of this system, towards contributing design principles for AI-powered technology that supports student self-assessments.

Uses of artificial intelligence for learning and doing in the workplace: A case study drawn from the coffee industry

Bradley Davey, Northwestern University, Corey Liam, Chicago Coffee Company

In February 2025, the price of raw coffee reached its all-time high: $4.30/lb. The rapid increase of prices throughout 2024 to their peak in early 2025 owed to a dizzying number of factors, with massive droughts in Brazil and Vietnam alongside heightened consumer demand playing key roles. This poster focuses on Corey, the coffee buyer for Chicago Coffee Company (CCC), with whom we've conducted ethnographic fieldwork for over two years, and his use of artificial intelligence (AI). How, for example, does Corey use AI to forecast global weather and consumer demand to execute hundred-thousand-dollar orders that shape CCC's future? We answer this through vignettes depicting Corey's on-the-job uses of AI, which he (as co-author) will elaborate in person. Corey represents a perspicuous case of AI in real-world work, clarifying speculations about AI's role in workplaces and education. The case also offers insights for learning scientists: Corey started his job with little training, just weeks before coffee prices soared. With few experts available, he relied on AI to learn. "AI's also a learning tool. I've used it to study plant biology, agricultural cycles, and build out consumption reports, logistics strategies, and cost modeling frameworks. I've developed systems to identify patterns in our inventory and structure purchasing around projected demand with more precision than would be possible manually." How does Corey use AI not just to do his job, but to learn to do it?

Drag & Prompt: Creating Contextual Computing Tutorials with Instructor-Structured LLM Generation

Mehmet Arif Demirtas, Kathryn Cunningham, UIUC

To serve diverse learner motivations, the goal of computer science (CS) education should expand beyond training software developers. Learners might be interested in using programming as a supplemental tool (i.e. end-user programmers) or exploring programming applications to communicate with their technical coworkers (i.e. conversational programmers). AI-assisted coding tools might further increase the demand for these goals. However, while there has been some work on supporting these learning goals, existing CS instructional material primarily focuses on code-writing and software development, especially beyond introductory topics. To address this gap, we propose an instructor-in-the-loop tool that assists instructors in creating online tutorials with LLMs. These tutorials follow purpose-first programming (PFP), which is an instructional approach that breaks down complex programs into commonly used code patterns with clearly stated purposes. With our tool, the instructor creates an outline by composing complex examples using these code patterns. An LLM fleshes out this outline by contextualizing these examples to motivate students and suggesting questions to encourage active learning, and the instructor refines the final output. By using common code patterns as a constraint to generate structured output, we reduce LLM hallucinations and provide control over learning goals to the instructor. Our poster shows the ongoing work on the design and classroom evaluation of the tool.

Adolescent LLM use in the college admissions essay-writing process

Aidan Z. Fitzsimons, Elizabeth M. Gerber, Duri Long, Northwestern University

US adolescents are readily adopting general-purpose large language models (LLMs), powered by AI, to write college application essays. These essays serve a dual purpose for developing adolescents: 1) they help applicants communicate their personalities and goals when competing for selective college spots; perhaps more importantly, 2) they help adolescents crystallize their developing self-concept and narrative identity. However, AI use in this context has not been explored. As adolescents increasingly use generative AI to write personal narratives, we know little about applicant use patterns and motivations. In interviews with 20 recent US college applicants, adolescents report feeling pressured to use AI to write essays to compete in an opaque process with peers who they suspect are using it for drafting essays. We chart applicant AI use, which varies in extent, across stages of the cognitive writing process. Our knowledge of how AI constructs personal narratives is also limited. Through a mixed-methods audit of 160 AI-generated college application essays created with ChatGPT, we find that prompts referencing marginalized gender identities tend to reify harmful gendered master narratives. We are co-designing intelligent coaching support tools that 1) emphasize a self-concept development process and that 2) do not reify harmful master cultural narratives, using participatory design methods with nonprofit partners, applicants, coaches, and admissions offices.

PLAID: Supporting Computing Instructors to Identify Domain-Specific Programming Plans at Scale

Yoshee Jain, Mehmet Arif Demirtas, Kathryn Cunningham, UIUC

Pedagogical approaches focusing on stereotypical code solutions, known as programming plans, can increase problem-solving ability and motivate diverse learners. However, plan-focused pedagogies are rarely used beyond introductory programming. Our formative study (N=10 educators) showed that identifying plans is a tedious process. To advance plan-based pedagogies in application-focused domains, we created an LLM-powered pipeline that automates the effortful parts of educators' plan identification process by providing use-case-driven code examples and candidate plans. In design workshops (N=7 educators), we identified design goals to maximize instructors' efficiency in plan identification by optimizing interaction with this LLM-generated content. Our resulting tool, PLAID, enables instructors to access a corpus of relevant programs to inspire plan identification, compare code snippets to assist plan refinement, and structure code snippets into plans. We evaluated PLAID in a within-subjects user study (N=12 educators) and found that PLAID led to lower cognitive demand and increased productivity compared to the state-of-the-art. Educators found PLAID beneficial for generating instructional material. Our findings suggest that human-in-the-loop approaches hold promise for supporting plan-focused pedagogies at scale. This poster will highlight the key features of PLAID and design guidelines that are generalizable to other human-in-the-loop instructional tools.

“I think it’s pretty helpful, but it kind of does a lot more hurt than good”: How youth wrestle with generative AI’s possibilities and harms

Charles Logan, Northwestern University

As teachers, school administrators, and policymakers debate how, if at all, to use generative artificial intelligence (GenAI) platforms like OpenAI’s ChatGPT and Google’s Gemini in education, there is wide agreement that young people should learn about artificial intelligence (AI) and GenAI. One starting point for learning about GenAI is answering a fundamental question: is GenAI ethical? Determining if the technology is ethical is especially important for young people shaping a future made ever more tenuous by current political and climate crises. In my poster, I share findings from a six-week summer program for high school students that explored the ethics of AI in education, police surveillance, and social media. My study focuses on an hour-long discussion that features students confronting and contemplating AI’s often obscured ecologies, from the grueling labor of Kenyan data annotators to the environmental costs of GenAI. Drawing on a theoretical framework that positions youth as “philosophers of technology,” I examine what kinds of critical analyses of GenAI the youth constructed, and how conducting this sensemaking about the technology supported youth in describing hopeful and harmful relations between themselves, the technology, other people, and the planet. My findings suggest the importance of designing an expansive ethical terrain for young people to traverse when considering what place GenAI may–and may not–have in our collective present and future.

AI Unplugged: Engaging Young Learners in Ethical Reasoning and AI Simulation Through Tangible Play

Duri Long, Hasti Darabipourshiraz, Maalvika Bhat, Lily Ng, Grace Wang, and Sophie Rollins, Northwestern University

Middle school students increasingly encounter AI but often lack the tools to critically examine its influence. Our research explores how unplugged (i.e. low or no tech), tangible, and discussion-driven activities make AI concepts more accessible and engaging. In this poster, we present two studies of activities we have designed that teach AI without computers. The first study, AI Unplugged: Tangible Simulations of AI Reasoning Processes for Middle School Learners, examines how unplugged activities help students grasp AI reasoning. We designed and tested activities about supervised and unsupervised machine learning, convolutional neural networks, and knowledge representations. Conducted with 20 middle schoolers at the Griffin Museum of Science and Industry, our study found that tangible interaction and learner simulation foster embodied learning, making abstract AI concepts more concrete. The second study, Introducing AI Without Computers: Hands-On Literacy and Ethical Sense-Making for Young Learners, explores activities that support AI literacy and ethical reasoning. During a five-day summer workshop at North Carolina State University, exit ticket surveys and teacher feedback showed that personally relevant contexts and unplugged interactions improved conceptual understanding and collaborative reasoning. Role-play activities, especially those related to data collection and AI decision-making, were particularly effective.

Fostering AI Literacy in Museum Exhibits through Creativity and Embodiment

Duri Long, Grace Wang, Hasti Darabipourshiraz, Nyssa Shahdadpuri, Shannon Sauhee Han, Sophie Rollins, Yiling Bai, Northwestern University

Fostering middle schoolers’ understanding of artificial intelligence (AI) is imperative as they increasingly encounter AI technology in their daily lives. Creativity and embodiment have shown promise in fostering AI literacy and interest in informal learning spaces. In this study, we leverage both elements in the design prototypes of two museum exhibits—DataBites and Knowledge Net—aimed at middle schoolers. We conducted an in-museum evaluation through interviews, examining participants’ understanding of AI and interest development. Our findings suggest that the exhibits promote interest and facilitate the learning of certain AI concepts. In both exhibits, creativity engaged learners and contributed to their learning, but embodiment had a more mixed effect depending on the form of embodiment used in the exhibit. We recommend that AI museum exhibits for middle schoolers utilize creative and personally relevant activities to engage learners, support hybrid conceptualizations of AI, and leverage tangible interaction to make AI concepts approachable.

Generating the Quizbowl Curriculum

Jacob Puthipiroj, Northwestern University

We introduce a novel, data-driven approach to preparing for quizbowl, a fast-paced, buzzer-based competition that tests players' knowledge across a broad range of academic subjects. Historically, quizbowl players have trained using a combination of reviewing lists of frequently occurring topics, reading source material, and drilling flashcards. Instead, we scrape an online database of 300,000 quizbowl questions across 600 tournaments, and construct knowledge graphs to display the interconnections between topics. Additionally, our graphs feature machine learning algorithms to group similar question-answer texts together. By clicking on each node, users can see a rapidly distilled breakdown of pivotal clues to learn and remember. In revealing how high-yield topics overlap and cluster together on the macro-scale, this GPU-accelerated visualization will help beginners to grasp the structure of a new field of knowledge. Experienced players likewise benefit from the micro-scale patterns that reveal subtle gaps within and between nodes. Beyond quizbowl, our graphs allow for new avenues of research on old pedagogical questions. We can support adaptive scaffolding by dynamically adjusting traversal paths based on individual learner progress, thus providing personalized learning experiences. Analyzing these traversal patterns across groups of learners can yield further insights into effective strategies for knowledge representation and instructional sequencing.
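The grouping step described above can be sketched in miniature. This is an illustrative sketch only, not the poster's actual pipeline: it assumes a simple bag-of-words representation with cosine similarity to link related question texts into graph edges; the real system's features, similarity measure, and scale are not specified in the abstract.

```python
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two bag-of-words vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def build_topic_graph(questions, threshold=0.3):
    # Nodes are question indices; edges connect lexically similar texts.
    vecs = [Counter(q.lower().split()) for q in questions]
    edges = []
    for i in range(len(questions)):
        for j in range(i + 1, len(questions)):
            if cosine(vecs[i], vecs[j]) >= threshold:
                edges.append((i, j))
    return edges

questions = [
    "this austrian composer wrote the jupiter symphony",
    "this composer of the jupiter symphony also wrote don giovanni",
    "this element with atomic number 79 is used in jewelry",
]
edges = build_topic_graph(questions)
print(edges)  # → [(0, 1)]: only the two composer questions link
```

At scale, a real implementation would replace raw token counts with stronger text features and a proper graph data structure, but the thresholded-similarity idea is the same.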

Body Poses and Pose Detection: Computational Inquiry Towards Sport Sensemaking

Ashley Quiterio, Audrey Benitez Rojo, Allyson Lee, Vishesh Kumar, Marcelo Worsley, Northwestern University

The intersection of sports and computer science (CS) offers many opportunities for learning that can broaden participation within both disciplines. We consider this within an after-school learning environment, where youth interact with HomeCourt.AI, an artificial intelligence (AI)-embedded sports training app. The implementation occurred with elementary school youth in the winter of 2023, and it taught youth about the relationship between sports and CS. In this study, we explore how to analyze these interactions in ways that utilize machine learning (ML) to better understand youth’s sensemaking about their sport performance and computational thinking skills. The goal of our study is to develop new methodologies between multimodal learning analytics (MmLA) and interaction analysis (IA) that support researchers’ sensemaking around video data. By generating joint coordinates, pose estimation, and ball detection from video data, we investigate youth’s body positions as an indicator of gaze, establishing a quantifiable metric to evaluate how much time youth were focused on the app and their peers. We identified this feature as a potential estimator of students’ relationships to sports and CS ideas and practices. For our poster, we will present current findings from ongoing MmLA work to demonstrate the capabilities of computational methods for supporting IA. We will demo the joint estimation along with HomeCourt.AI to show the connections between AI and sports education.

Using Generative AI to Scaffold the Creative Process in Music with Tinkerable Recommendations

Cameron L. Roberts, Kristin Fasiang, Eleanor O'Rourke, Michael S. Horn, Northwestern University

Current approaches to integrating generative AI into music learning are predominantly text-based, enabling users to create music without engaging deeply with the music-making process. To advance beyond this paradigm, this poster outlines the design principles of a co-creative AI tool and learning environment designed to support music learning by scaffolding the creative process. Our design centers on four key principles: maintaining learner agency in the creative process, enabling appropriation of musical material through tinkering, fostering open-ended exploration of multiple ideas, and ensuring coherence between AI outputs and learners’ prior ideas. We present an interactive demo of our tool implemented in MusicLOGO 2.0, a novel platform for creating music with computer programming. This tool generates multiple musical recommendations based on a learner’s existing work, presented as both playable audio and editable code. To illustrate our design principles in action, we describe a pilot study that tested our tool with 19 secondary school students over the course of a two-day workshop about creating music with computer code. Through a case study of one learner’s experience, we show the benefits of educational AI tools that are tailored to specific domains and that leverage a broad range of representations and modalities. We argue that this approach can support learning more effectively than a one-size-fits-all approach that uses natural language as the primary medium.

Spatial Language Analysis in the Era of AI: Comparing Methods of Old and New

Qingzhou Shi, Lauren Pagano, Max Chen, David H. Uttal, Northwestern University

Spatial thinking involves mentally representing and transforming objects and the relations among those objects. It is critically important to survival and learning, particularly in the STEM domains. We have been studying the development of spatial thinking through analysis of children's language as they complete spatially rich tasks. One example is our work in the Chicago Children's Museum; children build model structures and record their comments and conversations with their parents. These conversations are spatially rich and provide valuable insights into children's spatial understanding. This method allows us to learn a great deal about spatial thinking in a rich, natural context that children find engaging. The analysis of children's spoken language, however, raises challenges that we attempt to address through AI's NLP capabilities. These challenges of human coding include the very slow and often unreliable identification of spatial themes in language. We have been exploring several LLMs, using transcriptions of parent-child conversations to both replicate prior work using human coding of language and extend this process to reveal new insights. We will present several approaches explored, including Bag of Words, zero-shot LLM prompting with chain-of-thought, and multi-agent systems. We look forward to sharing our work, receiving valuable feedback, and establishing future collaborations with researchers interested in spatial cognition and computational linguistics.
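As a toy illustration of the Bag of Words approach mentioned above (with the caveat that the spatial lexicon and coding scheme here are invented for the example, not taken from the study):

```python
# Hypothetical spatial-term lexicon; the study's actual coding
# scheme is not specified in the abstract.
SPATIAL_TERMS = {"on", "under", "over", "next", "behind", "top",
                 "bottom", "inside", "between", "taller", "higher"}

def spatial_word_count(utterance: str) -> int:
    # Bag-of-words pass: count tokens that match the spatial lexicon.
    tokens = utterance.lower().replace(",", " ").replace(".", " ").split()
    return sum(1 for t in tokens if t in SPATIAL_TERMS)

transcript = [
    "Put the block on top of the tower",
    "I like the red one",
    "The beam goes between the two columns",
]
counts = [spatial_word_count(u) for u in transcript]
print(counts)  # → [2, 0, 1]
```

A lexicon match like this is fast but brittle (it misses context-dependent spatial meaning), which is precisely the gap the LLM-based approaches in the poster aim to close.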

Designing for Youth Voice and Critique through Arts-based AI Explorations

Andy Stoiber, University of Wisconsin-Madison, Paulo Blikstein, Columbia University, Leah Rosenbaum, Columbia University, and Erica Halverson, University of Wisconsin-Madison

While AI is typically positioned within science and technology, story-telling and artistic traditions have historically offered more timely and humanistic means to express the puzzlement, critique, anxiety, and awe of such transformational innovations. This poster showcases an art-science approach that supports children and their families in learning about the largely unfamiliar fundamentals of AI. This project centers on week-long art-science camps in partnership with a local children’s museum. The camp will pair creative writing and artmaking with computer vision, large language models, generative AI, and the role of data in AI systems. Physical computing platforms will act as embodied AI agents, materials for creative expression, and partners in artistic performance. Youth will draw on their existing funds of knowledge to imagine and explore speculative futures with AI, using it as a thought partner and creative collaborator. Structured, hands-on art-science activities will support participants in learning not only the technical fundamentals of AI but also its inherent ethical quandaries, while coming to understand and envision AI futures. Throughout the program, participants design and iterate technological contraptions inspired by their speculative storytelling, leveraging the arts to create a final contraption or art piece embodying their AI understandings, hopes, and fears–which are, in turn, showcased to the museum public.

Multi-Modal AI for Reflective and Affective Programming Support

Caryn Tran, Eleanor O’Rourke, Northwestern University

Programming requires not only technical skill but also emotional monitoring and self-regulation. Yet in computing education, few environments effectively support students’ metacognitive or emotional processes while they work. A review by Loksa et al. (2022) highlights that key affective, motivational, and social regulation theories from psychology remain underexplored in CS education. Findings from Li et al. (2024) further emphasize the importance of addressing learners’ negative emotions and metacognitive knowledge during programming to prevent disengagement. Recent advances in AI, including in emotion recognition, large language models, and speech recognition, now make it possible to scaffold these processes more naturally and responsively. Rather than offering task-based hints, AI systems can engage learners as reflective dialogue partners (Paludo & Montresor, 2024), supporting both cognitive and emotional regulation. This poster proposes the design of a multi-modal AI tool to support metacognition during programming. We present a prototype that uses simple IDE event detection and facial emotion recognition to identify key moments when reflection may be valuable. The system prompts learners through lightweight, voice-based conversation that elicits their thinking and emotional states, focusing on fostering reflection and emotional regulation without analyzing their code.

ScratchTutorialMaker: LLM-Driven Tutorial Generation for Scratch

Jiayi Wang, Northwestern University

Scratch (scratch.mit.edu) is a block-based programming platform designed for beginners and hosts millions of community-created projects. These projects serve as a rich resource for learning, allowing learners to read code and remix projects they find interesting. However, for novice coders, especially those learning independently, this process can feel intimidating and difficult to begin. ScratchTutorialMaker is a system that leverages large language models (LLMs) and programmatic analysis to automatically generate step-by-step tutorials from existing Scratch (.sb3) projects. By analyzing the structure and behavior of a project, such as its scenes, sprite interactions, event triggers, and game mechanics, our system decomposes the code into functional milestones that novice coders can follow. LLMs are used not only for analysis but also to draft age-appropriate explanations and scaffolded tutorial steps for milestones. The result is a system that transforms complex community projects into accessible learning experiences, helping learners understand how a project works and how to recreate it themselves.

“Now There Really Isn’t an Excuse”: Exploring LLM-Supported Approaches to Empower K-12 Teachers in Culturally Relevant Pedagogy

Jiayi Wang, Northwestern University, Ruiwei Xiao, CMU, Xinying Hou, University of Michigan

Culturally Relevant Pedagogy (CRP) is vital for equitable K-12 education, yet teachers struggle with implementation due to time, training, and resource gaps. This study explores how Large Language Models (LLMs) can address these barriers by introducing ATLAS, an LLM tool that assists teachers in adapting AI literacy curricula to students’ cultural contexts. Through interviews and lesson adaptation tasks with four K-12 teachers, we examined ATLAS’s impact on CRP integration. Results showed ATLAS enhanced teachers’ confidence (post-survey: unanimous 5/5), streamlined demographic integration, improved efficiency, and provided actionable feedback. This work highlights LLMs’ role in bridging CRP and AI literacy, emphasizing teacher-AI collaboration for equitable education.

AI-Enabled Learning with Data Infographics: Modeling Learners’ Attention and Comprehension through Eye-Movement Heatmaps

Kristine Zlatkovic, Northwestern University

This study investigates how learners process data infographics with seductive details, focusing on individual differences in working memory and repeated exposure to learning tasks. Seventy undergraduates completed sequential tasks using COVID-19 bar graphs with task-relevant, irrelevant, or no embellishments. Comprehension was measured by task completion time; attention was analyzed using eye-tracking. We created heatmaps by applying kernel density estimation to gaze coordinates, saved them as images, and clustered them using unsupervised machine learning to reveal patterns of visual attention. Mixed-effects regression and multinomial logistic models examined how working memory and task repetition predicted comprehension and attentional distribution. Findings suggest working memory influences how learners engage with infographics. Those with stronger visuospatial memory focused more on relevant text when seductive details were helpful, while irrelevant details distracted learners with lower memory. With repeated tasks, learners became more efficient—shifting attention from supportive text to key data, indicating growing infographic literacy. We propose an AI-enabled system that uses real-time eye-movement data to personalize content—blurring distractions and magnifying essential information. This approach supports learners with attentional challenges, working memory differences, or limited data visualization experience, enabling more inclusive learning.
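The heatmap construction described above (kernel density estimation over gaze coordinates) can be sketched roughly as follows; the grid size, bandwidth, and fixation coordinates are illustrative assumptions, and the study's actual implementation and clustering step are not shown.

```python
import math

def gaze_heatmap(points, grid=8, bandwidth=1.0):
    # Evaluate a Gaussian kernel density estimate of gaze (x, y)
    # coordinates on a grid-by-grid raster (unnormalized).
    heat = [[0.0] * grid for _ in range(grid)]
    for gy in range(grid):
        for gx in range(grid):
            for (px, py) in points:
                d2 = (gx - px) ** 2 + (gy - py) ** 2
                heat[gy][gx] += math.exp(-d2 / (2 * bandwidth ** 2))
    return heat

# Fixations clustered near (2, 2) should yield the hottest cell there.
fixations = [(2.0, 2.0), (2.2, 1.8), (1.9, 2.1), (6.0, 6.0)]
heat = gaze_heatmap(fixations)
peak = max(
    ((x, y) for y in range(8) for x in range(8)),
    key=lambda c: heat[c[1]][c[0]],
)
print(peak)  # → (2, 2)
```

Rasters like this can then be saved as images and fed to an unsupervised clustering step, as the abstract describes, to group participants by attentional pattern.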
