Yunsung Kim
Google Scholar /
Email /
GitHub /
Twitter /
LinkedIn
I am a Postdoctoral Scholar at the Stanford Graduate School of Education, sponsored by Prof. Candace Thille. I received a PhD in Computer Science from Stanford University in 2024, where I was advised by Prof. Chris Piech.
My research aims to build computational tools that can help instructors understand and aid their students more efficiently and effectively. I often use ideas from probabilistic modeling, machine learning, and artificial intelligence.
I was a Machine Learning Scientist Intern (Summer 2022) in the Learning Sciences Organization at Amazon, where I was supervised by Prof. Candace Thille. Before coming to Stanford, I finished my undergraduate studies at Columbia University, where I was fortunate to have worked with Prof. Augustin Chaintreau and Prof. Martha Kim.
One of my greatest sources of inspiration for my research is my love for teaching, and I have been involved with a number of courses.
Yunsung Kim*, Jadon Geathers* (Equal Contribution), Chris Piech
EDM '24: Proceedings of the 17th International Conference on Educational Data Mining, 2024
paper /
code /
StochasticGrade is an open-source framework for auto-grading programs that produce probabilistic output. It offers an exponential speedup over the standard two-sample hypothesis testing baseline at identifying incorrect programs and lets instructors control the rate of misgrades. Moreover, the “disparity measurements” computed by StochasticGrade enable fast and accurate clustering of student programs by error type. We demonstrate the accuracy and efficiency of StochasticGrade using student data collected from four assignments in an introductory programming course.
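For intuition, here is a minimal sketch of the two-sample hypothesis testing baseline mentioned above, not the StochasticGrade algorithm itself: sample outputs from a reference solution and from a student program, and accept the student program only if a two-sample Kolmogorov-Smirnov test fails to reject equality of the two output distributions. The function names, sample size, and significance level below are illustrative.

```python
# Illustrative two-sample baseline for grading programs with probabilistic
# output (not the StochasticGrade algorithm): compare samples drawn from a
# reference solution and a student program with a Kolmogorov-Smirnov test.
import numpy as np
from scipy.stats import ks_2samp


def sample_outputs(program, n):
    """Run a zero-argument program n times and collect its numeric outputs."""
    return np.array([program() for _ in range(n)])


def grade_stochastic(reference, student, n=10_000, alpha=1e-3):
    """Accept the student program if we fail to reject that its output
    distribution matches the reference's at significance level alpha."""
    _, p_value = ks_2samp(sample_outputs(reference, n), sample_outputs(student, n))
    return p_value >= alpha


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    correct = lambda: rng.normal(0.0, 1.0)     # reference solution's output
    buggy = lambda: rng.normal(0.5, 1.0)       # subtly shifted student output
    print(grade_stochastic(correct, correct))  # True: distributions match
    print(grade_stochastic(correct, buggy))    # False: mismatch detected
```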
Yunsung Kim, Chris Piech
L@S '23: Proceedings of the Tenth ACM Conference on Learning @ Scale, 2023
paper /
code /
website /
High-Resolution Course Feedback (HRCF) is a tool for minimizing the delay between when an issue comes up in a course and when instructors receive feedback about it. Each week it requests feedback from a small random subset of the students, while surveying each student exactly twice per term. We demonstrate that, with no extra effort compared to mid- or end-of-term surveys, HRCF can surface constructive and actionable feedback early on.
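As a toy illustration of the sampling scheme, the sketch below assigns each student on a hypothetical roster to exactly two randomly chosen survey weeks, so that any given week polls only a small random slice of the class. The roster size, number of weeks, and function name are made up for the example.

```python
# Toy sketch of an HRCF-style survey schedule (illustrative only): each
# student is surveyed in exactly two randomly chosen weeks, so every week
# requests feedback from only a small random subset of the class.
import random
from collections import defaultdict


def hrcf_schedule(students, num_weeks, surveys_per_student=2, seed=0):
    rng = random.Random(seed)
    schedule = defaultdict(list)  # week index -> students surveyed that week
    for student in students:
        for week in rng.sample(range(num_weeks), surveys_per_student):
            schedule[week].append(student)
    return schedule


if __name__ == "__main__":
    roster = [f"student_{i}" for i in range(120)]
    schedule = hrcf_schedule(roster, num_weeks=10)
    for week in sorted(schedule):
        print(f"week {week}: survey {len(schedule[week])} students")
```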
Yunsung Kim, Sree Sankaranarayanan, Chris Piech, Candace Thille
EDM '23: Proceedings of the 16th International Conference on Educational Data Mining, 2023
paper /
code /
VTIRT is a new algorithm for inferring dynamic learner proficiency when learning occurs alongside assessment. While retaining high inference quality, VTIRT runs 28 times faster than the fastest existing inference algorithm, and its modular design makes its inferences interpretable.
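The sketch below illustrates only the problem setting, not the VTIRT algorithm: a standard Rasch-style (1PL) response model in which proficiency drifts between items, so the quantity to infer is a trajectory of abilities rather than a single static value. All parameter values are invented for the example.

```python
# Illustration of the setting this work targets (not the VTIRT algorithm):
# a Rasch-style response model in which proficiency changes between items,
# so inference must recover a trajectory theta_1, ..., theta_T.
import numpy as np


def p_correct(theta, difficulty):
    """Rasch (1PL) probability of answering an item correctly."""
    return 1.0 / (1.0 + np.exp(-(theta - difficulty)))


def simulate_responses(num_items=8, learning_rate=0.3, seed=0):
    rng = np.random.default_rng(seed)
    theta = -1.0                                # initial proficiency (made up)
    difficulties = rng.normal(0.0, 1.0, num_items)
    responses = []
    for d in difficulties:
        responses.append(bool(rng.random() < p_correct(theta, d)))
        theta += learning_rate                  # proficiency grows with practice
    return difficulties, responses


if __name__ == "__main__":
    difficulties, responses = simulate_responses()
    for d, r in zip(difficulties, responses):
        print(f"difficulty {d:+.2f} -> {'correct' if r else 'incorrect'}")
```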
Yunsung Kim, Chris Piech
LAK '23: 13th International Learning Analytics and Knowledge Conference, 2023
paper (NB: Some figures are not properly displayed in Chrome) /
code /
Are there structures underlying student work that are universal across open-ended tasks? We demonstrate that, across many subjects and assignment types, the probability distribution underlying student-generated open-ended work is close to Zipf’s Law. We develop an algorithm for inferring this latent structure and discuss how it can help classrooms.
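For a sense of what a Zipf-like structure looks like in this context, here is a small sketch, with hypothetical data and not the paper's inference algorithm: count how often each distinct response occurs and fit a line to log frequency versus log rank; a slope near -1 corresponds to Zipf's Law.

```python
# Quick, illustrative check of Zipf-like structure in a bag of open-ended
# responses (hypothetical data; not the paper's inference algorithm):
# fit a line to log(frequency) versus log(rank) of each distinct response.
import numpy as np
from collections import Counter


def zipf_exponent(responses):
    """Slope of log(frequency) vs. log(rank); a value near -1 indicates a
    distribution close to Zipf's Law."""
    counts = np.array(sorted(Counter(responses).values(), reverse=True), dtype=float)
    ranks = np.arange(1, len(counts) + 1)
    slope, _intercept = np.polyfit(np.log(ranks), np.log(counts), 1)
    return slope


if __name__ == "__main__":
    # Hypothetical responses constructed so the i-th most common answer
    # appears roughly 1000 / i times, i.e. an exactly Zipfian head.
    answers = [f"answer_{i}" for i in range(1, 51) for _ in range(round(1000 / i))]
    print(f"fitted exponent: {zipf_exponent(answers):.2f}")  # about -1
```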