About us

Apart Research is an AI safety research organization focused on making advanced AI safe and beneficial. We run research sprints and hackathons that have engaged 6,000+ participants across 55 events in 200+ global locations, resulting in 22 peer-reviewed papers at top AI conferences.
We bring together engineers, researchers, and policy experts to build practical tools for AI safety and governance. Join us for hackathons, workshops, and local meetups.

AI Control Hackathon - Mox San Francisco

1680 Mission St, First Floor Conference Room, San Francisco, CA, US

The Problem
AI systems are getting more capable, and we need practical ways to ensure they can't subvert the safety measures we put in place. AI control is about preventing bad outcomes even when an AI is intentionally trying to circumvent oversight. The foundational research exists, but we need more builders turning theory into working tools.

The Event
One weekend to build evaluation environments, control protocols, and red-teaming tools that make AI control practically enforceable. Work alongside other builders, attend talks from AI safety researchers at Redwood Research, UK AISI, and more, and compete for prizes.

Register to receive talk invites and participate virtually: AI Control Hackathon

Attending in person in SF? RSVP for the Luma Event
(If you're coming to Mox, please register on both so you get all updates and talk links.)

Food provided throughout the event.

Speaker Schedule:
All talks streamed live to registered participants:
Thursday, Mar 12 at 8:00am PT - Tyler Tracy, Redwood Research (Co-organizer)
Thursday, Mar 19 at 2:00pm PT - Tyler Tracy, Redwood Research (Co-organizer)
Friday, Mar 20 at 10:00am PT - Aryan Bhatt, Redwood Research (Keynote)
Friday, Mar 20 - Rogan Inglis, UK AI Security Institute (ControlArena Overview)
More speakers to be announced.

Prizes & Opportunities:
- First place project receives a fully funded trip to ControlConf Berkeley 2026
- $2,000 prize pool across all tracks
- Invitation to the Apart Research Fellowship with continued development support
- Direct connections to AI control researchers

Who Should Join
Anyone who can build and believes AI systems need robust safety mechanisms that hold up even under adversarial conditions: engineers, ML researchers, security professionals, red teamers. No AI safety background needed.

Hosted by
Apart Research in partnership with Redwood Research.
