Nudging Retirement Savings: A Cross-Cultural Experiment in 12 Countries
Maria Gonzalez, Richard Thaler, Sendhil Mullainathan
I am an Assistant Professor in the Department of Economics at Stanford University. My research combines experimental economics with insights from psychology to understand how people make decisions in complex, uncertain environments.
Current projects include studying the effectiveness of behavioral nudges in public policy, the psychology of wealth inequality, and how cognitive biases affect financial decisions across cultures.
I am a co-PI of the Stanford Behavioral Policy Lab and received the AEA Distinguished Young Economist Award in 2023.
PhD in Behavioral Economics
University of Chicago, Economics
2014 - 2019
Chicago, IL
Thesis: Choice Architecture and Welfare in Developing Economies
Assistant Professor of Economics
Stanford University, Economics
2021 - Present
Stanford, CA
Postdoctoral Fellow
Harvard Kennedy School
2019 - 2021
Cambridge, MA
Maria Gonzalez, Eldar Shafir
Maria Gonzalez
Foundations of Faithful Reasoning in Language Models
Developing training methods and evaluation frameworks for improving logical consistency in large language models.
Human-Aligned NLP Systems
Multi-institution project on building NLP systems that align with human values and intentions.
MIT Technology Review Innovators Under 35
Recognized for pioneering work on faithful reasoning in AI systems.
Best Paper Award
NSF CAREER Award
Early-career faculty award for research on interpretable language models.
Default effects in health insurance enrollment
Cross-cultural variation in loss aversion
Postdoc position on our DARPA-funded project on human-aligned NLP systems.
Requirements
PhD in NLP, ML, or related field. Publications in top venues.
How psychological insights reshape economic theory and policy. Covers bounded rationality, prospect theory, and nudge design.
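As a quick illustration of one model covered in the course, here is a minimal sketch of the prospect-theory value function in the standard Kahneman–Tversky form; the parameter values below are the commonly cited median estimates and are used here purely for illustration.

```python
def prospect_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Subjective value of a gain or loss x relative to a reference point.

    alpha, beta capture diminishing sensitivity; lam is the
    loss-aversion coefficient (illustrative median estimates).
    """
    if x >= 0:
        return x ** alpha           # concave over gains
    return -lam * (-x) ** beta      # convex over losses, scaled by loss aversion

# A $100 loss looms larger than a $100 gain:
print(prospect_value(100))    # ≈ 57.5
print(prospect_value(-100))   # ≈ -129.5
```

The asymmetry between the two outputs is the loss-aversion effect that motivates much of nudge design, such as framing defaults to avoid perceived losses.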
Design and analysis of economic experiments, both laboratory and field.
Excited to share that our paper 'Scaling Faithful Reasoning in Large Language Models' has been accepted as an oral presentation at NeurIPS 2024!
I am recruiting 2 PhD students to start Fall 2025. Research areas: LLM reasoning, interpretability, and alignment. Please apply through the MIT EECS admissions portal.
MIT Technology Review
The Researchers Making AI Think More Clearly
Feature article on our group's work on faithful reasoning in language models.
Lex Fridman Podcast
AI Alignment: Where Are We Now?
Conversation about the current state of AI alignment research and practical approaches.
Department of Economics
Stanford University