Apart Research is hiring
AI Infrastructure Engineer (with Events Experience)
Build the AI infrastructure that lets Apart's hackathons grow 5x by the end of 2026.
About Apart Research
Apart Research is an AI safety nonprofit building the global talent pipeline for the AI safety ecosystem. Our remote hackathons and fellowships have become a primary on-ramp into empirical AI safety. In just the last two years, we've engaged 7,000+ participants across 50+ global research Sprints (hackathons), and we have published 30+ peer-reviewed AI safety papers, including two oral spotlights at ICLR 2025 (top 1.8% of accepted papers), through our fellowships.
Our alumni now work at Anthropic, Google DeepMind, the UK AI Security Institute, and dozens of other labs, government bodies, and AI safety orgs.
We're at an inflection point. Our hackathons currently draw around 400 signups each, and we want to 5x that by the end of 2026 while launching more events, faster and at higher quality. Achieving this requires investing in the engineering and operational backbone needed to run more and larger events.
We are looking for an AI engineer excited to make high-impact events happen (ideally with real events experience), building the infrastructure and tooling that make the manual work disappear.
The Role
You'll be defining how we build Apart's events function for the next several years rather than filling a slot in someone else's plan. You'll work alongside the CEO and the Operations Coordinator.
This isn't a Program Manager role with engineering on the side, and it isn't a software engineering role with events on the side. It's a founding-engineer-grade build of Apart's events function, done by someone who has shipped both software and events before.
You will:
- Build the AI tooling that lets us scale our programs with the same ops headcount. Write and maintain software to eliminate team bottlenecks and accelerate our programs.
- Create dashboards providing real-time visibility into progress, results, and program status, while upgrading our current stack (AWS, Notion, Discord, Framer, Google Workspace, and Claude Code).
- Automate repetitive coordination work, including outreach, communications, and other operations.
- Collaborate with Apart and external researchers to understand talent sources, participant bottlenecks, and data flows before designing solutions.
- Lift the quality bar of every event we run, ideally drawing on your direct events experience.
- Apply agile methodologies to accelerate delivery and improve quality, not just through standard Apart Sprints (hackathons), but by rapidly piloting new programs within frameworks that allow us to assess impact and quickly decide whether to pivot or scale.
- Set the technical direction for program ops at Apart for the years ahead.
What You'll Own
- Engineering and AI tooling (~40%): building and maintaining AI automations, tools, and dashboards that replace recurring manual work; having a voice on what to build vs. buy vs. integrate; setting the technical direction for event ops.
- Event quality and operations (~25%): applying your experience to lift Apart's bar across formats; identifying what makes the difference between a forgettable AI safety program and a career-altering one.
- Reliable delivery of the operational surface area (~25%): e.g. partner comms, judge coordination, scoring infrastructure, registration flows, post-event follow-up, reminder systems.
- Engineering and ops culture (~10%): bringing software-team rigor (fast iteration loops, code review, retros, postmortems) to a fast-moving organisation.
Who You Are
Required
- You don't just coordinate engineering work; you do it. The bar for this role is shipping yourself, not writing the spec for a contractor who ships it. Ideally you bring experience with AWS, Notion and n8n.
- You've shipped in real software teams using agile methodologies (e.g. sprints, code review, CI/CD, retros, ticketing). You bring industry rigor.
- You're a software engineer first, with real software engineering experience. You learned to code without LLMs as your primary tool, and you've since integrated them thoughtfully and heavily into your workflows, ideally at more than one organisation. You're very familiar with agentic coding frameworks.
- You have events experience, or are highly motivated to learn. Ideally you have direct experience managing (and ideally scaling) events, programs, hackathons, fellowships, conferences, accelerators, or similar: taking them from small to substantial, from rough to polished, from one-off to repeatable. You can describe specific events, their scale, and their outcome metrics.
- You think before you build. You ask why before shipping. You push back when priorities seem wrong. You optimize for outcomes, not output.
- Strong written communication and high attention to detail. Your emails read well, your docs are clear, deadlines are hit, and mistakes get caught before they ship. You apply professional rigor and a sharp eye for slop.
Nice to have
- AI safety context: you've followed the field, attended events, read the literature.
- Experience at known event-running organisations (MLH, Devpost, accelerators, fellowships, university hackathon programs, conference orgs, etc.).
- You've built tooling specifically for events (judging systems, participant comm tools, etc.).
- Track record of being the person on a team who quietly raises everyone else's bar.
You're encouraged to apply even if you don't believe you meet every listed qualification.
You might be
The clean version of this person doesn't really exist as a job title in the world. Some plausible archetypes:
- A mid-senior DevOps/AI engineer at an AI org who's quietly been running the internal hackathon or onboarding bootcamp on top of their day job, and wants to make events ops infrastructure their actual job.
- A TPM or engineer at a frontier lab who's been the hidden ops backbone of the programs their team ran, and wants to do that work at an org where it's the headline role.
- A founder of a small EA / AI safety community who ships real code (not just specs), and wants to scale someone else's mission rather than start their own.
- A hackathon-circuit organizer (MLH, Devpost, HackMIT, university programs) who picked up serious engineering chops along the way, and wants to apply both at an org that runs hackathons as a strategic lever, not a marketing line.
- A DevRel engineer or developer advocate at an AI/dev-tools company who's run workshops, hackathons, and technical events as part of the job, ships real code alongside the community work, and wants to make events infrastructure the main event rather than a side project.
You're strongly encouraged to apply even if your background doesn't fit any of these neatly. Speculative applications welcome.
What we offer
- High-leverage work. The infrastructure you build directly shapes how many AI safety researchers come through our programs over the next months and years. You will be a multiplier of multipliers.
- Compensation: $70-100k USD/year. Our band is calibrated to remote AI safety nonprofit norms, with flexibility to negotiate for exceptional candidates.
- Flexible working arrangements.
- Compute, conference, and retreat budget: travel support to EAGs, ML conferences, and team retreats.
- Equipment and productivity stipend: monthly expenses card to upgrade your home setup, tools, and workflows.
- Flat hierarchy, low bureaucracy. Small, mission-focused team.
Logistics
- Full-time, fully remote.
- EU/UK timezone preferred (overlap with CET working hours).
- Reports to: CEO.
- Start date: As soon as possible.
- Hiring process:
  - Application form
  - Work task
  - Intro call
  - Paid 1-day work-sample exercise
  - Reference checks
  - Offer
Diversity and inclusion
We're committed to building a diverse and inclusive team. We particularly encourage applications from women, people of colour, and neurodivergent applicants. Reasonable adjustments are available at any stage of the process; contact careers@apartresearch.com if you need them.
Apply
This form should take about 15 minutes. Keep your responses short. Bullet points are fine. Please don't use AI.