
Dr. Maria Gonzalez

Assistant Professor of Economics

Department of Economics, Stanford University

Behavioral Economics
Decision Science
Public Policy Nudges
Economic Inequality
Experimental Methods

About

I am an Assistant Professor in the Department of Economics at Stanford University. My research combines experimental economics with insights from psychology to understand how people make decisions in complex, uncertain environments.

Current projects include studying the effectiveness of behavioral nudges in public policy, the psychology of wealth inequality, and how cognitive biases affect financial decisions across cultures.

I am a co-PI of the Stanford Behavioral Policy Lab and received the AEA Distinguished Young Economist Award in 2023.

Education

PhD in Behavioral Economics

University of Chicago, Economics

2014 - 2019

Chicago, IL

Thesis: Choice Architecture and Welfare in Developing Economies

Experience

Assistant Professor of Economics

Stanford University, Economics

2021 - Present

Stanford, CA

Postdoctoral Fellow

Harvard Kennedy School

2019 - 2021

Cambridge, MA

Publications

3,450 Citations
42 h-index
58 i10-index

Featured

Cognitive Load and Financial Decision-Making in Low-Income Households

Maria Gonzalez, Eldar Shafir

AER, 2024
Journal Article
Cited by 51

The Behavioral Welfare State: How Framing Shapes Support for Redistribution

Maria Gonzalez

JPE, 2023
Journal Article
Cited by 178

Grants & Funding

Active Grants

Foundations of Faithful Reasoning in Language Models

National Science Foundation (NSF)
PI
$750,000
2023–2026

Developing training methods and evaluation frameworks for improving logical consistency in large language models.

Human-Aligned NLP Systems

DARPA
Co-PI
$1,200,000
2022–2025

Multi-institution project on building NLP systems that align with human values and intentions.

Awards & Honors

MIT Technology Review Innovators Under 35

MIT Technology Review, 2023

Recognized for pioneering work on faithful reasoning in AI systems.

Best Paper Award

NeurIPS 2024

NSF CAREER Award

National Science Foundation, 2022

Early-career faculty award for research on interpretable language models.

Lab Members

Current Members

David Park
PhD Student

Default effects in health insurance enrollment

Fatima Al-Rashid
PhD Student

Cross-cultural variation in loss aversion

Open Positions

PhD Student in LLM Reasoning
PhD Student

We are looking for 2 PhD students interested in improving the reasoning capabilities of large language models. A strong background in NLP or ML is required.

Requirements

MSc or equivalent in CS/ML/NLP. Strong programming skills in Python/PyTorch.

Apply by December 15, 2025
Postdoctoral Researcher — AI Alignment
Postdoc

Postdoc position on our DARPA-funded project on human-aligned NLP systems.

Requirements

PhD in NLP, ML, or related field. Publications in top venues.

Apply by June 30, 2025

Courses

Current Courses

ECON 280: Behavioral Economics
Fall 2024
Current

How psychological insights reshape economic theory and policy. Covers bounded rationality, prospect theory, and nudge design.
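The prospect-theory unit centers on the Kahneman–Tversky value function; one standard parameterization (the illustrative estimates from the 1992 cumulative prospect theory paper, not a claim about this course's materials) is:

```latex
v(x) =
\begin{cases}
x^{\alpha} & \text{if } x \ge 0, \\
-\lambda\,(-x)^{\beta} & \text{if } x < 0,
\end{cases}
\qquad \alpha \approx \beta \approx 0.88,\quad \lambda \approx 2.25
```

Here λ > 1 captures loss aversion: a loss reduces value more than an equal-sized gain increases it, which is the behavioral lever behind many default-based nudges.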

Past Courses

ECON 345: Experimental Methods in Economics
Spring 2024

Design and analysis of economic experiments, both laboratory and field.

Announcements

Pinned

NeurIPS 2024 Oral Presentation

Sep 15, 2024

Excited to share that our paper 'Scaling Faithful Reasoning in Large Language Models' has been accepted as an oral presentation at NeurIPS 2024!


Looking for PhD Students — Fall 2025

Oct 1, 2024

I am recruiting 2 PhD students to start Fall 2025. Research areas: LLM reasoning, interpretability, and alignment. Please apply through the MIT EECS admissions portal.


Media & Press

MIT Technology Review

The Researchers Making AI Think More Clearly

Article
Jul 20, 2024

Feature article on our group's work on faithful reasoning in language models.

Lex Fridman Podcast

AI Alignment: Where Are We Now?

Podcast
Mar 15, 2024

Conversation about the current state of AI alignment research and practical approaches.

Frequently Asked Questions

Contact