Due Diligence and Ongoing Monitoring

  • 1.  AI Due Diligence

    Posted 05-02-2024 11:59 AM

    I have been working on creating an AI risk assessment and stakeholder scoping questionnaire for solutions/tools incorporating AI. I am also trying to develop a standardized due diligence package we can request from vendors to risk assess the AI use case and ensure the vendor has implemented policies/procedures to appropriately address risks.

    Has anyone developed a listing of AI-specific due diligence items that you request from vendors who are providing an AI or AI-enabled solution? So far my draft due diligence request list includes:

    Data retention policy
    Data privacy policy
    Change management policy
    Independent audits or validation of AI model outputs
    Penetration testing
    Evidence of the implementation, effectiveness, and maintenance of security controls within the AI system
    Configuration management plan
    Third-Party Risk Management (TPRM) policy
    Data Processing Agreement (DPA) that outlines data protection responsibilities
    White paper or other documentation of AI design, theory, and logic

    Thanks,

    Shelly



    ------------------------------
    Shelly Chase
    VP Operational Risk
    ------------------------------


  • 2.  RE: AI Due Diligence

    Posted 05-08-2024 05:57 AM

    Hi Shelly,
    I think you are off to a great start here. Of course, the information you need to gather for AI due diligence may vary based on the actual use case. But for general AI considerations, I might also add the following:


    Legal and Regulatory Compliance: Ensure the vendor complies with applicable laws (e.g., GDPR, CCPA) and industry standards.
    Commitment to Ethical AI: Review an actual AI policy or other documented commitment to ethical AI practices detailing how they address bias, fairness, and transparency.
    • Bias Mitigation: Documentation of how they mitigate bias in AI models during training and deployment.
    • Training Data Details: Understand where their training data comes from and whether it is diverse and representative.
    Explainability Documentation: It's essential to understand how AI models arrive at their outcomes, so the process must be explainable (and auditable). For this reason, I would also ask for XAI (Explainable AI) documentation covering the processes and methods the vendor uses to provide clear, human-understandable explanations for AI and machine learning model decisions.
    Two common methods are (see the sketch after this list):
    • LIME (Local Interpretable Model-agnostic Explanations): Explains individual predictions by approximating the model locally with an interpretable one.
    • SHAP (SHapley Additive exPlanations): Computes feature importance based on Shapley values from cooperative game theory.
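    For anyone who wants to see what a SHAP explanation actually looks like, below is a minimal Python sketch using the open-source shap package. The model and dataset are illustrative stand-ins (a scikit-learn random forest on a public dataset), not any particular vendor's system; the point is the per-prediction, per-feature output, which is the kind of auditable record XAI documentation should describe.

    ```python
    # Minimal sketch of per-prediction SHAP explanations.
    # Requires the shap and scikit-learn packages; the model and data below
    # are illustrative stand-ins, not a vendor's actual AI system.
    import shap
    from sklearn.datasets import load_diabetes
    from sklearn.ensemble import RandomForestRegressor

    # Illustrative data and model standing in for a vendor's AI component.
    X, y = load_diabetes(return_X_y=True, as_frame=True)
    model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

    # TreeExplainer computes Shapley values for tree ensembles: each value is
    # the contribution of one feature to one individual prediction.
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X)  # shape: (n_samples, n_features)

    # Per-feature contributions for the first prediction -- a human-readable,
    # auditable record of why the model produced that specific output.
    for feature, contribution in zip(X.columns, shap_values[0]):
        print(f"{feature}: {contribution:+.3f}")
    ```

    In a due diligence context, you would not run this yourself; you would ask the vendor to show that their explainability process produces output like this and that it is retained for audit.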

    Again, much will depend on the actual risks posed by the AI in the specific product or service and use case.
    I hope this information is helpful. As this is a new area for many, I would love to hear from other members who have relevant experience to share.