Incident Response Manager at Anthropic
Location: San Francisco, CA | New York City, NY | Washington, DC
Compensation: $310k–$375k USD
Type: Full-time
Posted: Today
Remote: Yes
Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
Anthropic's Safeguards team is responsible for enforcing our policies, protecting users, and ensuring our platform is not misused. As the Incident Response Manager, you'll own the operational backbone of how Safeguards responds when things need attention fast. You'll run our on-call program, drive the automation that lets a small team cover a growing surface area, and manage the sensitive cross-functional escalations that cut across Policy, Legal, Safeguards, Product, and Comms.
This role calls for someone who has managed scaled escalations and can build durable processes that hold up in fast-moving moments. It requires discernment, close coordination with cross-functional partners during high-stakes situations, and the ability to translate learnings from incidents into improvements to our safety systems.
On-call program ownership
Own the Enforcement On-Call program end-to-end: rotations, coverage models, and escalation paths
Establish and maintain on-call documentation, runbooks, and SOPs so that anyone stepping into the rotation has what they need to act quickly and consistently
Triage tooling issues affecting on-call operations and drive them to resolution with engineering partners
Report out regularly on inbound volume, response metrics, trend lines, and staffing needs to inform long-term investment
Cross-functional escalations
Serve as the primary point of contact for sensitive enforcement escalations that require cross-functional coordination across Policy, Legal, Safeguards, Product, and Comms
Absorb information from multiple inputs quickly, frame the decision cleanly, and communicate clear updates, recommendations, and trade-offs to both operational and executive audiences
Drive processes that keep teams aligned and moving during fast-changing situations, including sensitive investigations, product mitigations, and coordinated responses
Manage escalation pathways to law enforcement and NCMEC based on established referral criteria, and track referral volume and trends over time
Automation & program maturity
Identify the right repetitive operational work in the on-call and escalations pipeline and partner with engineering to automate it
Expand and mature our rapid response toolkit through hands-on investment in new tools, playbook development, and team adoption
Build dashboards and reporting that give the team and leadership a clear view of enforcement operational health
Continuously improve documentation, quality, and consistency as the program scales
Background in trust and safety operations, incident response, escalations management, program management, or a closely related operational role at a technology company
Experience leading programs with meaningful cross-functional surface area, and comfort being the person others look to during an escalation
Ability to absorb a lot of information quickly and communicate clearly and concisely in writing and verbally during high-stakes moments
Experience coordinating with law enforcement, regulatory bodies, or other external compliance stakeholders
Comfort making judgment calls with incomplete information and knowing when to escalate
Ability to manage multiple concurrent workstreams without losing track of details
Availability to support team on-call and weekend coverage when assigned
Experience building or materially improving automation in an operations context (routing, triage, auto-assignment, ticket enrichment)
Familiarity with regulatory reporting obligations in trust and safety (NCMEC, EU DSA, or similar frameworks)
Proficiency with data tools (SQL, dashboards, spreadsheets) sufficient to build and maintain reporting workflows
The annual compensation range for this role is listed above.
For sales roles, the range provided is the role’s On Target Earnings ("OTE") range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.
Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience
Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience
Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position
Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.
We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.
Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you're ever unsure about a communication, don't click any links—visit anthropic.com/careers directly for confirmed position openings.
We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.
The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.
Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.
Guidance on Candidates' AI Usage: Learn about our policy for using AI in our application process.