AI Researcher and Applied Scientist
I’m a Principal AI Scientist at Indeed and a Computer Science Ph.D. student at the University of Texas at Austin, where I am co-advised by Greg Durrett and Eunsol Choi.
I am broadly interested in deep learning for natural language processing. My current research focuses on reasoning, consistency, factuality, and post-training in the context of neural language models.
Before joining Indeed, I was a Machine Learning Scientist at Amazon, where I primarily worked on deep learning for ranking and recommender systems. I received my M.S. in Computer Science from Western Michigan University.
From Distributional to Overton Pluralism: Investigating Large Language Model Alignment
Thom Lake, Eunsol Choi, and Greg Durrett.
Pluralistic Alignment Workshop at NeurIPS 2024.
(preprint) Distilling Large Language Models using Skill-Occupation Graph Context for HR-Related Tasks
Pouya Pezeshkpour, Hayate Iso, Thom Lake, Nikita Bhutani, and Estevam Hruschka.
arXiv 2023.
Flexible Job Classification with Zero-Shot Learning
Thom Lake.
Workshop on Recommender Systems for Human Resources at RecSys 2022.
(preprint) Large-scale Collaborative Filtering with Product Embeddings
Thom Lake, Sinead Williamson, Alexander Hawk, Christopher Johnson, and Benjamin Wing.
arXiv 2019.
Proceedings of the First Workshop on Natural Language Processing for Human Resources (NLP4HR 2024)
Estevam Hruschka, Thom Lake, Naoki Otani, and Tom Mitchell (Editors).
Workshop at EACL 2024.
Analyzing Repetitive Sequences with Structured Dynamic Bayesian Networks
Thom Lake.
M.S. Thesis, 2015.
Last updated: December 2024