AI Governance Lead (Operational) - Vice President 1
Apex Group Ltd (UK Branch)
The Apex Group was established in Bermuda in 2003 and is now one of the world’s largest fund administration and middle office solutions providers.
Our business is unique in its ability to reach globally, service locally, and provide cross-jurisdictional services. With our clients at the heart of everything we do, our hard-working team has successfully delivered on an unprecedented growth and transformation journey, and we are now represented by circa 13,000 employees across 112 offices worldwide.
Your career with us should reflect your energy and passion.
That’s why, at Apex Group, we will do more than simply ‘empower’ you. We will work to supercharge your unique skills and experience.
Take the lead and we'll give you the support you need to be at the top of your game. We offer you the freedom to be a positive disrupter and turn big ideas into bold, industry-changing realities.
For our business, for clients, and for you
The AI Risk Assessment Lead / Operational AI Governance Lead is a leadership role within the Office of the Chief AI & Data Officer (CDAO), responsible for operationalizing the enterprise AI Governance and Responsible AI framework.
This role owns the end-to-end AI risk assessment lifecycle, ensuring that all AI use cases, models, and AI agents are systematically assessed for risk, documented, approved, monitored, and continuously governed in line with regulatory, ethical, and business requirements.
This role acts as the central execution authority for AI risk and controls, bridging AI product teams, risk management, compliance, legal, security, and data governance. This is a high-impact, execution-focused role operating in a fast-paced environment where AI systems are being rapidly deployed and must be governed at scale.
Strategic Objectives of the Role
This role exists to:
- Ensure AI systems are safe, compliant, and trustworthy
- Embed risk-by-design into the AI lifecycle
- Enable rapid AI innovation with controlled risk
- Provide audit-ready AI governance
- Protect the organization from regulatory, reputational, and ethical AI risks
Key Responsibilities
1. AI Risk Assessment Framework (Core Mandate)
- Design and operationalize the enterprise AI Risk Assessment Framework.
- Define risk taxonomy across:
- Model risk
- Data risk
- Bias & fairness
- Explainability
- Security & privacy
- Regulatory & ethical risk
- Establish and maintain risk scoring and tiering (Low / Medium / High / Prohibited).
2. AI Use Case Intake & Approval
- Own the AI use case intake and triage process.
- Lead risk assessments for:
- New AI use cases
- Model changes and retraining
- AI agents and autonomous systems
- Facilitate AI approval forums and control gates.
- Provide go/no-go recommendations.
3. Model & Agent Governance
- Ensure all AI models and agents are:
- Registered in the AI registry
- Documented with model cards
- Linked to data sources and lineage
- Define operational controls for:
- Human-in-the-loop
- Explainability
- Monitoring and drift
- Kill-switches and fallback mechanisms
4. Regulatory & Audit Readiness
- Ensure alignment with:
- NIST AI RMF
- ISO 42001
- EU AI Act
- Model Risk Management (SR 11-7)
- DORA, GDPR, internal policies
- Prepare AI governance evidence for:
- Regulators
- Internal audit
- External auditors
5. Continuous Monitoring & Control
- Define AI monitoring requirements for:
- Bias and fairness
- Performance drift
- Data drift
- Model degradation
- Partner with Data and AI engineering to implement:
- Alerts
- Thresholds
- Escalation workflows
6. Cross-Functional Influence & Enablement
- Act as the AI risk advisor, in partnership with the Head of AI Governance, to:
- Product teams
- Risk & Compliance
- Legal
- Security
- Data Governance
- Translate technical AI risks into:
- Business risk language
- Regulatory exposure
- Financial and reputational impact
- Drive adoption of Responsible AI practices across the organization.
Required Experience & Skills
AI & Risk Expertise (Strong Requirement)
- 8–10 years in:
- AI governance
- Model risk management
- Technology risk
- Data risk / compliance
- Strong understanding of:
- ML/GenAI lifecycle
- Model development and deployment
- LLMs, agents, and foundation models
Regulatory & Governance Knowledge
- Deep familiarity with:
- NIST AI RMF
- ISO 42001
- EU AI Act
- SR 11-7 / model risk
- GDPR, DORA
- Experience operating in regulated industries (FS, healthcare, insurance, fintech).
Technical Fluency
- Ability to engage with data scientists, ML engineers, and platform teams
- Understanding of:
- Model cards
- Training vs inference pipelines
- MLOps monitoring tools
Communication & Operating Style
- Exceptional executive communication skills
- Ability to influence without authority
- Comfortable operating in a fast-paced, high-ambiguity environment
- Strong facilitation and decision-making under pressure
- Able to balance innovation with risk pragmatically
Disclaimer: Unsolicited CVs sent to Apex (Talent Acquisition Team or Hiring Managers) by recruitment agencies will not be accepted for this position. Apex operates a direct sourcing model and where agency assistance is required, the Talent Acquisition team will engage directly with our exclusive recruitment partners.