Recognizing Underlying Concepts in Student Explanations
In this project, I explore computational tools for understanding student utterances. When learning, students construct unique explanations for the phenomena they encounter, piecing together a range of different relevant concepts. Tracking how these underlying concepts change gives us a window to observe, in fine detail, when and how learning happens. Specifically, I use language model embeddings to computationally dissect student utterances from interviews in which students attempt to explain “Why is it hotter in the summer and colder in the winter?”. I found that, while noisy, embeddings derived from language models (BERT, GPT, etc.) contain information about the underlying concepts in an utterance (e.g. “The Earth spins.”), and that this information can be teased out. More work remains to understand the structure of the embedding space and to apply these methods to larger-scale datasets.
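As a rough illustration of what “teasing out” a concept from embeddings might look like, here is a minimal sketch of a linear probe trained on mean-pooled BERT embeddings. The utterances, the concept labels, and the choice of bert-base-uncased are all invented for illustration; they are not the project’s actual data, models, or pipeline.

```python
import torch
from sklearn.linear_model import LogisticRegression
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

def embed(texts):
    """Mean-pool BERT's final hidden states into one vector per utterance."""
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state         # (batch, tokens, dim)
    mask = batch["attention_mask"].unsqueeze(-1)          # (batch, tokens, 1)
    return ((hidden * mask).sum(1) / mask.sum(1)).numpy() # (batch, dim)

# Toy, hand-invented utterances labeled for one concept: "The Earth spins."
utterances = [
    "The Earth spins around once a day, so the sun moves across the sky.",
    "Because the Earth rotates, different places face the sun at different times.",
    "In summer the Earth is closer to the sun, so it gets hotter.",
    "The sun's rays hit us more directly in the summer.",
    "The Earth turning is what makes day and night.",
    "Winter is colder because the sun is farther away then.",
]
has_concept = [1, 1, 0, 0, 1, 0]  # 1 = utterance invokes "The Earth spins."

# A simple linear probe: if the concept is linearly decodable from the
# embeddings, the probe should separate the two classes.
probe = LogisticRegression(max_iter=1000).fit(embed(utterances), has_concept)

# Probability that a new utterance invokes the concept.
print(probe.predict_proba(embed(["The Earth goes around on its axis."]))[:, 1])
```

A linear probe like this is a common first test of whether a concept is recoverable from an embedding space; a more faithful version would swap in the actual interview transcripts, concept codes, and embedding models used in the project.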
