Cornell AI Alignment Club

New for Fall 2026: CS 1998 — Intro to AI Safety & Alignment

We are currently planning a student-led Cornell CS 1998 course focused on AI safety and alignment.

The course is planned to cover foundation model training pipelines, mechanistic interpretability, RLHF and goal misgeneralization, safety evaluations and red teaming, scalable oversight and control, and policy and career pathways in AI safety.

The format will emphasize hands-on notebooks, live demos, and paper-driven discussion to help students build both conceptual understanding and practical skills.

Ways to get involved

CAIA offers several ways for students to learn about AI safety, connect with peers, and contribute to research.

Introduction to AI Alignment Fellowship

CAIA runs an 8-week introductory fellowship on AI safety, covering technical and policy topics including interpretability, learning from human feedback, US AI policy, and catastrophic risk from advanced systems.

The fellowship is open to undergraduate and graduate students. Students with ML experience are encouraged to apply, but no prior experience is required.

The program meets weekly in small groups; dinner is provided, and there is no required work outside meetings.

Technical Paper Reading Group

CAIA runs an open technical ML reading group led by experienced TAs.

Sessions meet weekly in small groups to discuss significant recent papers in AI and ML safety, with dinner provided.

There is no additional required work outside meetings.

Student Research

CAIA supports original student research in AI safety.

Students interested in technical or policy research can be connected with resources and a faculty or upperclassman mentor.

Reach out at cornellaialignment@gmail.com to get started.