AI Researcher and Applied Scientist
I work on language-model systems for open-ended problems. My work brings together agentic systems, simulation, principled evaluation, and human–computer interaction. I’m currently an AI Technical Fellow at Indeed. Before joining Indeed, I was a Machine Learning Scientist at Amazon, where I worked on deep learning for ranking and recommender systems. I received an MS in Computer Science from Western Michigan University and was a member of the TAUR lab at UT Austin, led by Greg Durrett.
Research interests: agent runtimes, evaluation, interactive environments, recommender systems, and post-training for reasoning, coherence, and memory management.
From Distributional to Overton Pluralism: Investigating Large Language Model Alignment
Thom Lake, Eunsol Choi, and Greg Durrett.
NAACL 2025.
ChartMuseum: Testing Visual Reasoning Capabilities of Large Vision-Language Models
Liyan Tang, Grace Kim, Xinyu Zhao, Thom Lake, Wenxuan Ding, Fangcong Yin, Prasann Singhal, Manya Wadhwa, Zeyu Leo Liu, Zayne Sprague, Ramya Namuduri, Bodun Hu, Juan Diego Rodriguez, Puyuan Peng, and Greg Durrett.
NeurIPS 2025.
(preprint) Distilling Large Language Models using Skill-Occupation Graph Context for HR-Related Tasks
Pouya Pezeshkpour, Hayate Iso, Thom Lake, Nikita Bhutani, and Estevam Hruschka.
arXiv 2023.
Flexible Job Classification with Zero-Shot Learning
Thom Lake.
Workshop on Recommender Systems for Human Resources at RecSys 2022.
(preprint) Large-scale Collaborative Filtering with Product Embeddings
Thom Lake, Sinead Williamson, Alexander Hawk, Christopher Johnson, and Benjamin Wing.
arXiv 2019.
Proceedings of the First Workshop on Natural Language Processing for Human Resources (NLP4HR 2024)
Estevam Hruschka, Thom Lake, Naoki Otani, and Tom Mitchell (Editors).
EACL Workshop 2024.
Analyzing Repetitive Sequences with Structured Dynamic Bayesian Networks
Thom Lake.
MS thesis, 2015.
Last updated: April 2026