What to Know When Using AI
Includes downloadable templates
If your organization uses AI tools, especially ones built by other companies, you have important responsibilities. This course explains what those responsibilities are and how to meet them in a clear, simple way.
You’ll learn how to check if an AI system is safe to use, what to look out for, and how to make sure people stay involved in important decisions. We use real-life examples to show how this works in practice. For example, if you’re using AI to screen job applications or help customers, we’ll show you how to keep things fair, clear, and under control.
The course takes you through what to do before adopting an AI tool, how to make sure it stays safe and works well, and how to deal with any problems that come up. You’ll also learn how to follow the rules without making things too complicated.
To make this easier, the course includes ready-to-use templates and checklists that help you put what you learn into action. By the end, you’ll know how to use AI responsibly and follow the rules with confidence.
This course is designed for anyone responsible for selecting, integrating, using, or overseeing AI systems in an organization, especially systems sourced from third parties:
Cybersecurity & IT Risk Professionals – Evaluating the security posture and resilience of deployed AI systems.
Understand your legal duties: Learn what the EU AI Act requires from deployers and how to meet those obligations.
Manage third-party AI risks: Gain tools to assess and document compliance when using external AI systems.
Apply real-world lessons: Explore practical case studies on responsible AI use and risk management.
Advance your career: Earn a certification that demonstrates your expertise in AI deployment and regulatory compliance.