Hello Courtney,
I would agree more with your second point – the AI model should be evaluated for inherent risk. Since AI is still relatively new and evolving, it's important to recognize that models can be trained on biased or incorrect data, or even intentionally manipulated. For example, there's been evidence of organizations using biased algorithms in their hiring systems: the models were trained on resumes that were mostly submitted by men, so the algorithms were unintentionally biased against women. Validating such a model may not surface any errors, since the data it was drawing from was technically correct, just biased (see the rough sketch below for how that can happen).
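To illustrate, here is a purely hypothetical sketch with simulated data and made-up feature names (not any real vendor's model): a classifier trained on a male-dominated resume set can score well on an ordinary accuracy check while still recommending men at a much higher rate.

    # Hypothetical sketch: a hiring model trained on skewed historical data can
    # pass ordinary validation while still producing biased recommendations.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 5000

    # Simulated historical resumes: ~80% from men, with past hiring decisions that
    # unintentionally favored a feature correlated with male applicants.
    is_male = rng.random(n) < 0.8
    years_exp = rng.normal(8, 3, n)
    proxy = is_male.astype(float) + rng.normal(0, 0.5, n)  # e.g. certain keywords/clubs
    hired = (0.3 * years_exp + 2.0 * proxy + rng.normal(0, 1, n)) > 5.0

    X = np.column_stack([years_exp, proxy])  # note: gender itself is NOT a feature
    y = hired.astype(int)

    X_tr, X_te, y_tr, y_te, m_tr, m_te = train_test_split(X, y, is_male, random_state=0)
    model = LogisticRegression().fit(X_tr, y_tr)

    # Ordinary validation looks fine...
    print("validation accuracy:", round(model.score(X_te, y_te), 3))

    # ...but the recommendation rate differs sharply by gender.
    pred = model.predict(X_te)
    print("recommended (men):  ", round(pred[m_te].mean(), 3))
    print("recommended (women):", round(pred[~m_te].mean(), 3))

In other words, standard validation metrics can look clean while the disparate impact only shows up when you compare outcomes by group, which is why inherent-risk review and vendor due diligence matter before relying on model validation alone.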
However, AI risk extends beyond model validation and includes other risk areas like privacy, cybersecurity, ethics, and even business continuity and compliance. Before you perform due diligence, it may help to collaborate with a qualified subject matter expert and prepare questions for the vendor providing the AI model. You might ask, for example, where the data is sourced from, how often the system is updated with additional training data, and how they protect the integrity of their system.
I hope this gives you a good starting point, and I'm eager to hear how others are handling the evaluation of AI models, as this can be a very complex issue.
Original Message:
Sent: 09-10-2024 06:12 PM
From: Courtney Medina
Subject: Assessing risk of third-party Artificial Intelligence
I'm curious how other organizations are assessing the risks of third-party AI. One school of thought holds that the AI model itself isn't the risk; rather, the risk lies in what the AI produces and how it's used, so the risk isn't fully assessed until full model validation has occurred (often a lengthy timeline that doesn't always line up with the contracting timeline).
Alternatively, there is the view that the AI model should be evaluated for the inherent risk of using AI services, and that due diligence should be performed on the vendor providing the AI model just as you would for any other risk area. I'm curious - do other organizations treat the AI model as a risk area just like any other, or separately?