# AI Security

### **Introduction**

MSPs adopting AI must treat it as a new class of SaaS with unique risks. This section outlines the major risk areas, the safeguards MSPs should apply, and the policies needed to govern AI responsibly. Each subpage provides detail, examples, and guardrails.

### **Subpages Overview**

1. [**Risks & Guardrails for AI in MSP Environments**](https://docs.themspkb.com/ai-for-msps/ai-security/risks-and-guardrails-for-ai-in-msp-environments)\
   Explains the main risks (data, operational, business) and practical guardrails (policy, monitoring, oversight).
2. [**Data Handling & Privacy**](https://docs.themspkb.com/ai-for-msps/ai-security/data-handling-and-privacy)\
   Covers how AI tools process, store, and transmit data; residency and training risks; anonymization, tenant isolation, and contractual safeguards.
3. [**Operational Safeguards & Oversight**](https://docs.themspkb.com/ai-for-msps/ai-security/operational-safeguards-and-oversight)\
   Details practical controls: human-in-the-loop enforcement, sandbox testing, incident response, logging, and AI-native security layers.
4. [**AI Governance & Acceptable Use Policies**](https://docs.themspkb.com/ai-for-msps/ai-security/ai-governance-and-acceptable-use-policies)\
   Guidance on writing internal and client-facing policies, managing shadow AI, defining augmentation vs automation, and training users.

### **Bottom Line**

MSPs can safely adopt AI by following structured governance: identify risks, secure data handling, enforce clear policies, and maintain oversight.
