Prof. James Okafor

Professor of Molecular Biology

Department of Biochemistry, University of Oxford

CRISPR Gene Editing
Epigenetics
Stem Cell Biology
Genome Architecture
Chromatin Remodeling

About

I am a Professor in the Department of Biochemistry at the University of Oxford, where I lead the Genome Engineering & Epigenetics Lab.

Our research focuses on understanding how genomes are organized and regulated in stem cells, and developing novel CRISPR-based tools for precise genome editing. We combine experimental approaches with computational genomics to study the interplay between chromatin structure, transcription factor binding, and gene expression.
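As a toy illustration of the computational side of this work (the sequence and motif below are invented for the example, not data from the lab), one might scan a genomic region for occurrences of a transcription-factor binding motif:

```python
# Toy illustration: count overlapping occurrences of a (hypothetical)
# transcription-factor binding motif within a DNA sequence.
def count_motif(sequence: str, motif: str) -> int:
    """Count overlapping occurrences of `motif` in `sequence`."""
    count = 0
    start = 0
    while True:
        idx = sequence.find(motif, start)
        if idx == -1:
            return count
        count += 1
        start = idx + 1  # advance one base so overlapping hits are counted

# Invented example region scanned for the GATA motif
region = "TTGATAGATACCGGATAAT"
print(count_motif(region, "GATA"))  # prints 3
```

Real analyses of chromatin structure and transcription-factor binding use probabilistic motif models and genome-scale data, but the core operation is this kind of sequence scan.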

I am a Fellow of the Royal Society of Biology and recipient of the 2022 Francis Crick Medal.

Education

PhD in Molecular Genetics

University of Cambridge, Department of Genetics

2010 - 2014

Cambridge, UK

Thesis: Epigenetic Control of Lineage Commitment in Stem Cells

BSc (First Class) in Biochemistry

University of Lagos

2006 - 2010

Lagos, Nigeria

Experience

Professor of Molecular Biology

University of Oxford, Department of Biochemistry

2021 - Present

Oxford, UK

Postdoctoral Fellow

Broad Institute of MIT and Harvard

2014 - 2018

Cambridge, MA

Publications

Citations: 3,450
h-index: 42
i10-index: 58

Featured

Chromatin Accessibility Maps Reveal Cell-Type-Specific Regulatory Logic

Lin Zhou, James Okafor, Sarah Davies

Cell, 2024
Journal Article
67 citations

Base Editing Efficiency in Hematopoietic Stem and Progenitor Cells

James Okafor, Maria Santos

Science, 2023
Journal Article
256 citations

Single-Cell Multi-Omics Reveals Epigenetic Heterogeneity in AML

Emma Richardson, James Okafor, Ahmed Hassan

Nat Genet, 2023
Journal Article
89 citations

The Expanding CRISPR Toolbox: From Gene Knockout to Epigenome Editing

James Okafor, Lin Zhou

Annu Rev Biochem, 2022
Journal Article
430 citations


Awards & Honors

Francis Crick Medal

Royal Society, 2022

Fellow of the Royal Society of Biology

Royal Society of Biology

Lab Members

Current Members

Lin Zhou
Postdoctoral Researcher

CRISPR delivery systems and editing efficiency

Emma Richardson
PhD Student

Chromatin dynamics in hematopoietic stem cells

Ahmed Hassan
PhD Student

Single-cell multi-omics in cancer

Maria Santos
Research Assistant

Base editing protocols


Courses

Current Courses

BCH4001: Advanced Molecular Biology
Fall 2024
Current

Graduate-level course covering modern genome engineering, epigenetic regulation, and single-cell genomics.

Past Courses

BCH2010: Genetics and Genomics
Spring 2024

Undergraduate introduction to genetics, genome structure, and gene expression.


