Artificial Intelligence – Understanding the Technology and Its Risks, and Managing Them Successfully
Artificial Intelligence is introducing transformational changes across all aspects of our society. It is
delivering insights into complex problems that have eluded humans, providing unique capabilities that in
some cases replace humans and in others outperform them. Your business has most likely begun to
consider, or even to adopt, AI. But AI is not well understood, and many remain uncertain about its use,
effectiveness, and suitability.
Despite these concerns, organizations are rapidly adopting AI. In many cases, pervasive promotion in
the media, the perceived superhuman benefits, and the ease with which AI technology can be accessed are
driving adoption faster than the risks can be understood and controls can be put in place to mitigate
and manage them.
This two-day seminar provides auditors, risk managers and security professionals with an understanding of
what AI is (and what it is not), its risks, the methods for auditing and managing those risks, how to
achieve enterprise value from AI, and ultimately how to lead the successful adoption of AI.
The course then builds on this knowledge, presenting the methods for assessing and managing the risks of
AI projects, from inception through design, development, deployment and into ongoing operational use.
The course introduces participants to important regulatory standards and key industry
frameworks. Best practices from industry are introduced and discussed.
The course additionally highlights the unique cyber-security risks, aspects and considerations of AI
systems.
Intended audience: IT Auditors, Risk Managers, Security Professionals, Internal and External Auditors,
IT Management and Staff, Business Managers, Business Continuity and Disaster Recovery Managers,
Financial Executives, Legal Counsel
Learning objectives: Participants in this seminar will learn:
• What AI and Generative AI are, how they work, and how they can fail
• How AI impacts an organization
o How an organization may be transformed by AI
o What steps an organization must take to prepare itself for adoption of AI
o How key roles in an organization are transformed by the adoption of AI
o How to adopt AI successfully
• AI risk management
o What the risks of AI are
o How to audit and manage AI risks
o How to construct a control framework for AI
• How to promote successful adoption of AI
o Governance
o Data sources and value
o Project selection and management
o Effective auditing of risks and controls
• The risks associated with developing, deploying, and operating AI systems.
• Best practices for controlling the risks of AI projects, systems, and their ongoing use in production.
• Essential goals and techniques for conducting an AI Audit.
• Best practices that provide relevant guidance, including:
o An AI System Development Methodology and Project Lifecycle
o Regulatory statutes and frameworks relevant to AI
o Frameworks relevant to an AI audit
Seminar outline - Day 1: Audit and Control of Enterprise Artificial Intelligence (AI)
A. AI Background
a. What is AI (and what is not AI)?
b. Why AI is essential to a business today
c. Benefits and risks
B. Understanding how AI works (and fails)
a. Rule-based AI
b. Data/Deep Learning-based AI
c. Composite AI
d. AI can get it wrong
e. Adversarial AI
f. Key takeaways
C. Preparing the Organization for AI
a. Governance
b. Role of the Board
c. The AI Ethics Committee
d. Divisional Roles and transformation
e. Case Study #1
D. Launching an enterprise AI Program
a. Objectives
b. Team and Project selection
c. Critical success factors
d. Risks
e. Controls
f. Case Study #2
E. Identifying and Managing AI Risk
a. Axiomatic risk
b. Corollary risk
c. Adversarial AI risk (cyber)
d. Identifying and managing risk
e. Case Study #3
Seminar outline – Day 2: Audit and Control of AI Implementation
A. Auditing AI – Why?
a. The need for auditing AI
b. The value proposition for an AI audit
c. The goals and objectives of an AI audit
B. Understanding AI and Generative AI (GenAI) – A Primer
a. What is AI?
b. How AI works
c. How AI fails
C. The Risks of Artificial Intelligence
a. Can we trust AI?
b. Ensuring that AI is ethical
c. Learning from some well-known failures
d. How AI risks differ from legacy IT project risks
D. Best Practices for Controlling AI and Project Risks
a. Control objectives
b. Best practices
c. Axiomatic and corollary AI risks
d. Industry examples of success and failure
e. Case study #4
E. Conducting an AI Audit
a. Defining an AI audit
b. Determining scope and objectives
c. Key components to audit and how to do so
d. Case study #5
F. The AI System Development Methodology and Project Lifecycle
a. Project workstream and project approval
b. Data-build workstream
c. Model-build workstream
d. Train, test, validate workstream
e. Case study #6
G. Regulatory Statutes Relevant to an AI Audit
a. General Data Protection Regulation
b. California Consumer Privacy Act
c. Current EU Legislation on AI
d. NYC Bias Audit Law
H. Frameworks Relevant to AI
a. NIST AI Risk Management Framework 1.0
b. IIA AI Auditing Framework
c. NIST Taxonomy of Adversarial Machine Learning
d. Cross Industry Standard Process for Data Mining
e. AIRS+ Framework for Identifying and Classifying Implementation Risk
f. GAO: An AI Accountability Framework
g. Other relevant frameworks.
I. AI and Cyber-Security
a. AI Expands the Range of Attack Surfaces
b. Within AI Development, There are Many Direct and Indirect Attack Vectors
c. Attack strategies and their effects are complex
d. Detection is as important as protection
J. Conclusion
Seminar logistics: Because of the intensive case study workshops in all Risk Masters courses, attendees
obtain the maximum benefit when attendance is limited to approximately 75 people.
Contact:
Allan M. Cytryn, Principal,
acytryn@riskmastersintl.com
(201) 803-1536
Steven Ross, Executive Principal,
stross@riskmastersintl.com
(917) 837-2484