At Lilly, we unite caring with discovery to make life better for people around the world. We are a global healthcare leader headquartered in Indianapolis, Indiana. Our employees around the world work to discover and bring life-changing medicines to those who need them, improve the understanding and management of disease, and give back to our communities through philanthropy and volunteerism. We give our best effort to our work, and we put people first. We’re looking for people who are determined to make life better for people around the world.
Purpose:
We are seeking a strategic and results-oriented Senior Director to lead the Digital Legal Office’s (DLO) AI risk management program and serve as the DLO’s enterprise coordination point with the Cyber team on security risks that intersect with AI and privacy. This leader will join our data governance, privacy, cybersecurity, and artificial intelligence team (the “Digital Legal Office”) within the Legal department and will be accountable for scaling the DLO’s existing GRC program to fully encompass AI risk disciplines—defining the AI risk taxonomy, control library, and domain-specific content that integrates into the team’s established governance processes, risk management lifecycle, and ServiceNow tooling—while ensuring the DLO’s risk frameworks incorporate appropriate oversight of relevant cybersecurity controls through structured coordination with the Cyber team.
This role requires technical fluency in security concepts to collaborate effectively with Cybersecurity GRC functions and ensure security control attestations, threat intelligence, and cyber risk outputs are properly reflected in the DLO’s risk posture.
The ideal candidate is a proven program leader with hands-on experience standing up, maturing, and scaling AI risk management programs within regulated enterprises. They will be responsible for the AI risk management lifecycle end-to-end—from risk identification, measurement, and monitoring through control design, policy implementation, and executive reporting—and will build the multi-functional relationships and enterprise influence needed to sustain the program at scale.
Strong candidates will bring complementary privacy risk management experience and solid understanding of cybersecurity risk frameworks (e.g., NIST CSF) to ensure a cohesive, cross-domain approach across the DLO’s areas of oversight. They will influence senior leadership, represent Lilly’s AI governance posture externally, and model Team Lilly behaviors—Include, Innovate, Accelerate, Deliver—in every interaction.
Responsibilities:
Strategic Leadership & Governance:
- Lead the strategic direction for AI risk management within the DLO’s GRC program, defining the multi-year roadmap, maturity targets, and investment priorities needed to keep pace with Lilly’s expanding AI portfolio and evolving regulatory landscape.
- Scale the DLO’s existing GRC processes and governance model—including the DLO Risk Board, risk management lifecycle, and control framework—to fully encompass AI risk disciplines and integrate coordinated oversight of cybersecurity risk dependencies, ensuring alignment with Lilly’s enterprise risk appetite.
- Present the DLO’s AI and digital risk posture to senior leadership and executive stakeholders on a recurring cadence, translating complex risk landscapes into actionable insights that inform enterprise decision-making.
- Represent the DLO and Lilly’s AI governance posture in external forums, industry working groups, and regulatory engagements, building Lilly’s reputation as a leader in responsible AI governance.
- Champion a culture of responsible AI across the enterprise and foster multi-functional collaboration and constructive challenge.
AI Policy Development & Governance:
- Drive the creation, adoption, and continuous improvement of Lilly’s AI governance policies and standards, grounded in the NIST AI RMF Govern function and aligned with the EU AI Act’s risk-classification requirements and other relevant laws and regulations.
- Update and lead the enterprise rollout of AI-specific policies covering model risk, algorithmic fairness, transparency, and accountability—ensuring they are operationalized across business units, not just documented.
- Extend the DLO’s existing GRC framework to address the full AI lifecycle (design, development, deployment, monitoring, decommission), defining or updating the AI-specific policies, controls, and governance requirements that complement the established privacy and data governance program.
- Monitor and analyze emerging AI regulations, enforcement actions, and industry standards (e.g., NIST AI RMF updates, EU AI Act implementing guidance, ISO/IEC 42001, OECD AI Principles) to proactively update policies and frameworks.
- Translate evolving AI risk requirements into actionable procedures, job aids, and decision-support tools that business and technical teams can operationalize.
- Lead cross-functional collaboration—including with the Cyber GRC team—to embed AI risk controls into model development pipelines and technology solutions, ensuring security control requirements are addressed upstream.
AI Risk Management:
- Serve as the enterprise subject-matter authority on AI risk management, applying an AI RMF to identify, assess, and treat AI-specific risks—including bias/fairness, explainability, robustness, security, and privacy harms.
- Define the AI risk taxonomy, identify AI-specific risks and controls, and scale the DLO’s existing risk assessment methodology to encompass AI systems—including Algorithmic Impact Assessments, AI-specific risk scoring criteria, and gap analyses aligned to an AI RMF (e.g. NIST AI RMF, ISO/IEC 42001). Direct corrective actions and hold control owners accountable for remediation.
- Lead the definition, operationalization, and reporting of AI-specific KPIs and KRIs (e.g., model drift rates, fairness metrics, incident response times) to surface emerging risks and inform senior leadership decisions.
- Extend the DLO’s existing risk registry and issues log to include AI-specific risks, controls, and their relationships—ensuring integration with the Integrated Risk Management program and appropriate cross-referencing of cybersecurity risk outputs provided by the Cyber GRC team.
- Establish monitoring mechanisms for AI system performance, compliance, and control effectiveness throughout the model lifecycle. Prepare and present regular risk reports to senior management and executive stakeholders.
- Drive the consolidation and continuous improvement of AI and privacy risk management practices, simplifying processes and advancing program maturity over time.
Cybersecurity Risk Coordination:
- Serve as the DLO’s primary coordination point with the Cyber GRC team, establishing and maintaining a structured engagement model (e.g., recurring joint reviews, shared risk issue protocols, coordinated reporting cadences) to ensure alignment without duplicating cybersecurity GRC responsibilities.
- Maintain sufficient technical fluency in cybersecurity frameworks (NIST CSF, ISO 27001, CIS Controls) and security control domains (access management, data protection, vulnerability management, incident response) to translate Cyber GRC outputs into the DLO’s AI and privacy risk language.
- Ensure the DLO’s AI and privacy risk management frameworks incorporate appropriate oversight of security controls that underpin AI system integrity, data confidentiality, and model pipeline security—without owning or delivering those controls directly.
- Partner with the Cyber GRC team to identify and assess security threats unique to AI systems (e.g., adversarial attacks, model extraction, training data poisoning, prompt injection) and ensure these are reflected in the DLO’s risk registers and assessment processes.
- Coordinate with Cyber GRC on shared control attestation and evidence-gathering activities, ensuring the DLO can demonstrate that security dependencies within AI and privacy programs are monitored and validated.
- Serve as the DLO’s representative in multi-functional incident response and risk escalation processes for AI-related security events, ensuring privacy and AI governance considerations are addressed alongside security remediation.
AI Regulatory Compliance:
- Maintain deep, current expertise in global AI governance regulations, standards, and best practices—including the NIST AI RMF, EU AI Act, ISO/IEC 42001, OECD AI Principles, and sector-specific AI guidance (e.g., FDA AI/ML in pharma).
- Be responsible for the organization’s compliance with AI-related laws and standards, ensuring effective implementation, monitoring, and reporting of AI-specific controls across all business units.
- Build and maintain the AI-specific control library—identifying the unique AI risks, controls, and regulatory obligations that must be integrated into the DLO’s existing control framework—and mapping them to the AI RMF functions and relevant cybersecurity control dependencies.
- Maintain a solid, complementary understanding of privacy and data governance practices (GDPR, CPRA, HIPAA, NIST Privacy Framework) and their intersection with AI risk and cybersecurity requirements.
- Be responsible for audit and compliance documentation for AI governance, directing coordination with internal and external auditors—including managing shared audit responses with Cyber GRC where AI and security controls intersect.
- Lead AI-specific education and awareness activities across the enterprise, including responsible-AI literacy programs and executive briefings.
Technology:
- Leverage technology to drive efficiencies and improve effectiveness of AI governance and GRC processes at enterprise scale.
- Align the DLO’s AI risk posture with the overall company risk tolerance within the GRC tool (e.g., ServiceNow IRM, AI Control Tower), ensuring awareness of cybersecurity risk dependencies where applicable.
- Scale the DLO’s ServiceNow IRM footprint to include AI risk entities, controls, assessments, and workflows within the existing architecture—including cross-domain linkages to Cyber GRC risk data and integration with the established privacy risk management modules.
- Leverage technology, including AI-powered tools, to automate and streamline AI risk monitoring, reporting, and program controls.
Cross-Functional Influence & Capability Development:
- Influence and align multi-functional stakeholders—across the DLO, Cyber GRC, IT, Legal, and business units—on AI risk strategy, priorities, and governance decisions without direct authority, leveraging subject-matter credibility and relationship-building.
- Serve as a coach and mentor to DLO colleagues and multi-functional partners on AI risk management disciplines, elevating the organization’s collective capability and fluency in AI governance practices.
- Foster a culture of responsible AI and continuous learning across the DLO and broader enterprise.
Basic Qualifications
- Bachelor’s degree in a discipline related to risk management, information systems/computer science, data science, AI/machine learning, cybersecurity, information management, or a related field.
- 10+ years of progressive experience in governance, risk, and compliance, with at least 2 years directly focused on AI risk management (e.g., operationalizing an AI risk framework, standing up AI governance programs, conducting algorithmic impact assessments) and demonstrated experience leading or building GRC capabilities.
- 5+ years of experience leading or contributing to cybersecurity, data privacy, or compliance/quality efforts—with clear examples of how that experience has been applied to managing AI-specific risks and coordinating across security and privacy risk domains.
- Qualified applicants must be authorized to work in the United States on a full-time basis. Lilly will not provide support for or sponsor work authorization and/or visas (including, but not limited to, F-1 CPT, F-1 OPT, F-1 STEM OPT, J-1, H-1B, TN, O-1, E-3, H-1B1, or L-1 status) for this employment position.
Additional Skills/Preferences
- Demonstrated experience influencing senior leadership and executive stakeholders on risk strategy, program investment, and governance decisions.
- Demonstrated ability to influence and drive outcomes across organizational boundaries without direct authority—including experience coaching or mentoring colleagues on GRC, risk management, or compliance disciplines.
- Demonstrated, hands-on experience applying the NIST AI Risk Management Framework (AI RMF) or an equivalent structured AI risk framework (e.g., ISO/IEC 42001) in an enterprise setting—not simply awareness of the framework, but evidence of practical implementation and program scaling.
- Working knowledge of AI-related laws, regulations, and standards including the EU AI Act, U.S. state AI legislation, NIST AI RMF, ISO/IEC 42001, OECD AI Principles, and their intersection with privacy frameworks (GDPR, CPRA, HIPAA, NIST Privacy Framework) and cybersecurity frameworks (NIST CSF, ISO 27001).
- IAPP AI Governance Professional (AIGP) or ISACA Advanced in AI Risk (AAIR) certification strongly preferred; AAIA, CIPP, CIPM, CIPT, CRISC, CDPSE, CISSP, or CISM certifications valued as complementary credentials.
- Experience with AI-specific risk assessment methodologies: Algorithmic Impact Assessments, model risk management, AI red-teaming, bias/fairness testing frameworks.
- Direct experience with the NIST AI RMF Playbook and its suggested actions across Govern, Map, Measure, and Manage functions.
- Familiarity with AI/ML model lifecycle management and the technical concepts underlying AI risk (e.g., model drift, explainability techniques, training data governance, prompt injection risks, adversarial attacks, model extraction threats).
- Experience with ServiceNow IRM or similar GRC platforms for AI risk tracking, reporting, and cross-domain risk integration.
- Proficiency in developing and tracking AI-specific metrics and KPIs (fairness metrics, model performance indicators, AI incident rates).
- Proficiency in PIA/DPIA methodologies with experience extending these to AI impact assessments; privacy-by-design and responsible-AI-by-design experience.
- Working knowledge of cybersecurity control domains (access management, data protection, encryption, vulnerability management, incident response) sufficient to coordinate effectively with a dedicated Cyber GRC team—not expected to be a cybersecurity practitioner, but able to understand and translate security risk outputs.
- Experience establishing and operating multi-functional risk coordination models between separate GRC functions (e.g., privacy and security, AI and cyber) within a matrixed organization.
- Experience in a regulated industry (pharmaceutical, healthcare, financial services) where AI governance intersects with sector-specific compliance and cybersecurity requirements.
- Track record of leading enterprise-scale programs and driving multi-functional alignment across matrixed organizations.
- Demonstrated ability to think and act strategically while translating complex AI and security risk concepts for non-technical and executive audiences.
- Organizational Change Management education and/or certification.
- Experience as an IT/Security/Privacy/AI auditor, particularly with AI system audits or cross-domain audit coordination.
- Strong executive communication, presentation, and interpersonal skills—with demonstrated ability to build productive relationships across organizational boundaries and influence at all levels.
- Ability to work independently and collaboratively in a fast-paced, high-growth environment.
- High attention to detail and accuracy.
Additional Information
Role located in Indianapolis, IN with a hybrid work model. Relocation required.
Remote employees will not be considered for this position.
Lilly is dedicated to helping individuals with disabilities to actively engage in the workforce, ensuring equal opportunities when vying for positions. If you require accommodation to submit a resume for a position at Lilly, please complete the accommodation request form (https://careers.lilly.com/us/en/workplace-accommodation) for further assistance. Please note this is for individuals to request an accommodation as part of the application process and any other correspondence will not receive a response.
Lilly is proud to be an EEO Employer and does not discriminate on the basis of age, race, color, religion, gender identity, sex, gender expression, sexual orientation, genetic information, ancestry, national origin, protected veteran status, disability, or any other legally protected status.
Our employee resource groups (ERGs) offer strong support networks for their members and are open to all employees. Our current groups include: Africa, Middle East, Central Asia Network, Black Employees at Lilly, Chinese Culture Network, Japanese International Leadership Network (JILN), Lilly India Network, Organization of Latinx at Lilly (OLA), PRIDE (LGBTQ+ Allies), Veterans Leadership Network (VLN), Women’s Initiative for Leading at Lilly (WILL), enAble (for people with disabilities). Learn more about all of our groups.
Actual compensation will depend on a candidate’s education, experience, skills, and geographic location. The anticipated wage for this position is
$154,500 - $226,600
Full-time equivalent employees also will be eligible for a company bonus (depending, in part, on company and individual performance). In addition, Lilly offers a comprehensive benefit program to eligible employees, including eligibility to participate in a company-sponsored 401(k); pension; vacation benefits; eligibility for medical, dental, vision and prescription drug benefits; flexible benefits (e.g., healthcare and/or dependent day care flexible spending accounts); life insurance and death benefits; certain time off and leave of absence benefits; and well-being benefits (e.g., employee assistance program, fitness benefits, and employee clubs and activities). Lilly reserves the right to amend, modify, or terminate its compensation and benefit programs in its sole discretion and Lilly’s compensation practices and guidelines will apply regarding the details of any promotion or transfer of Lilly employees.
#WeAreLilly