AI Governance & Compliance Essentials
From Risk to Resilience with AI
Helping Great Companies Get Better at Compliance
This specialized education program is designed to equip your organization with the knowledge and tools needed to responsibly implement and manage artificial intelligence systems across high-risk sectors. By participating in this masterclass, your teams will gain a deep understanding of regulatory expectations, operational safeguards, and risk-based strategies aligned with both domestic and international frameworks.
Through a modular approach, we will address the practical and legal considerations associated with the use of AI in financial services, healthcare, privacy management, cybersecurity, and employment processes.
Upon completion, participants will be able to identify and manage AI-specific risks, implement compliance controls across critical functions, and support the responsible deployment of AI technologies in their organizations.
This program is intended for professionals responsible for implementing, auditing, or overseeing AI systems, data governance, compliance, and risk management functions.
1. AI in the Financial Sector
This module focuses on regulatory expectations, risk mitigation, and governance standards for AI in finance. Key topics include the OECD’s five AI principles, the role of the Chief Data Officer in enterprise-level AI oversight, integration of AI within Enterprise Risk Management (ERM), required documentation in financial risk programs, and practical approaches to managing AI-specific risks.
2. AI in the Healthcare Sector
Participants will learn how AI is regulated within medical device development, health data management, and digital therapeutics. The module addresses data governance, the 510(k) clearance pathway, De Novo classification requests, Humanitarian Use Devices (HUD), and Premarket Approval (PMA) processes. It will also cover quality documentation and AI risk assessment obligations for software classified as medical devices.
3. Privacy and AI
This module addresses the intersection of artificial intelligence and privacy regulation. The focus is on the California Privacy Rights Act (CPRA), HIPAA obligations when processing health data, and technical tools to monitor and reduce privacy risks in user activity, network traffic, virtual desktop environments, and authentication. The module emphasizes risk management and the integration of privacy controls into AI system design.
4. AI and Security
Participants will explore the cybersecurity and information security aspects of AI. The session covers risk-based security controls, the AI Risk Management Framework (AI RMF), best practices for cybersecurity in AI environments, alignment with the NIST Cybersecurity Framework (CSF), and the implementation of secure authentication methods, including context-aware MFA strategies.
5. AI and Employment Practices
This module focuses on the use of AI in hiring and workforce monitoring. Covered topics include legal and ethical considerations for AI in recruitment, compliance with anti-discrimination laws, including the Americans with Disabilities Act (ADA), algorithmic fairness in employment decisions, and the emerging risks tied to wearable technologies and employee tracking in the workplace.
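One concrete screen covered under algorithmic fairness is the EEOC's four-fifths (80%) rule from the Uniform Guidelines on Employee Selection Procedures: a selection rate for any group that falls below 80% of the rate for the highest-scoring group is generally treated as evidence of adverse impact. A minimal sketch of that check (the group labels and counts below are hypothetical illustration data, not from any real hiring process):

```python
# Four-fifths (80%) rule screen for adverse impact in selection outcomes.
# Group names and applicant counts are hypothetical examples.

def selection_rates(outcomes):
    """outcomes: {group: (selected, applicants)} -> {group: selection rate}"""
    return {g: selected / applicants for g, (selected, applicants) in outcomes.items()}

def four_fifths_check(outcomes, threshold=0.8):
    """Return {group: passes} where a group passes if its selection rate
    is at least `threshold` of the highest group's rate."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {g: (rate / top) >= threshold for g, rate in rates.items()}

outcomes = {
    "group_a": (48, 100),  # 48% selected
    "group_b": (30, 100),  # 30% selected; 0.30 / 0.48 = 0.625 < 0.8
}
print(four_fifths_check(outcomes))
```

Note that the four-fifths rule is a rough screen, not a legal safe harbor; statistical significance testing and job-relatedness analysis typically follow when a disparity is flagged.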
Gain practical expertise in navigating regulatory frameworks that govern AI
Mitigate legal, financial, and reputational risks from AI misuse
Build organizational readiness for audits and supervisory inquiries
Promote transparency and accountability in AI-driven operations
Implement sector-specific risk management programs for AI deployment
Ensure ethical and lawful AI practices across high-impact functions
Develop documentation and controls aligned with international standards
Support digital innovation while maintaining regulatory trust