AI regulation isn't only for legal teams: understanding the policy landscape matters for anyone building, deploying, or procuring AI systems.
The European Union's AI Act is the world's most comprehensive AI regulation. Key elements:
Risk tiers:

| Risk level | Examples | Requirements |
|---|---|---|
| Unacceptable risk | Social scoring by governments, manipulation of vulnerable groups | Banned outright |
| High risk | AI in hiring, credit, healthcare, critical infrastructure | Strict requirements: transparency, human oversight, accuracy, safety |
| Limited risk | Chatbots, image generators | Transparency requirements (must disclose AI-generated content) |
| Minimal risk | Spam filters, AI video games | No specific requirements |
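The tiered structure lends itself to a simple lookup. The sketch below encodes the table above as a Python mapping; the tier assignments and obligation strings come from that table, and the function is purely illustrative, not a legal classification tool:

```python
# Illustrative sketch of the EU AI Act's risk tiers as a lookup table.
# Tier contents mirror the table above; real classification depends on
# the Act's annexes and legal analysis, not keyword matching.
RISK_TIERS = {
    "unacceptable": {
        "examples": ["social scoring by governments",
                     "manipulation of vulnerable groups"],
        "obligations": "banned outright",
    },
    "high": {
        "examples": ["hiring", "credit", "healthcare",
                     "critical infrastructure"],
        "obligations": "transparency, human oversight, accuracy, safety",
    },
    "limited": {
        "examples": ["chatbots", "image generators"],
        "obligations": "disclose AI-generated content",
    },
    "minimal": {
        "examples": ["spam filters", "AI video games"],
        "obligations": "no specific requirements",
    },
}

def obligations_for(use_case: str) -> str:
    """Return the obligation summary for a listed use case."""
    for tier in RISK_TIERS.values():
        if use_case in tier["examples"]:
            return tier["obligations"]
    return "unclassified; review against the AI Act's annexes"

print(obligations_for("hiring"))
# A hiring tool falls in the high-risk tier, so strict requirements apply.
```

The point of the sketch is the shape of the regulation: obligations attach to the risk tier of the use case, not to the underlying model or technology.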
General Purpose AI (GPAI): Foundation models (GPT-4, Claude, Gemini) are subject to obligations including transparency reporting on training data and capabilities.
The United States has taken a more fragmented, sector-specific approach:
| Sector | Key rule |
|---|---|
| Employment | EEOC guidance: AI screening tools can create illegal discrimination even if not intentionally designed to |
| Consumer credit | Fair Credit Reporting Act + Consumer Financial Protection Bureau guidance on AI underwriting |
| Healthcare | FDA oversight of AI-assisted medical devices and diagnostics; HIPAA applies to health data AI processes |
| Finance/Banking | OCC, FDIC guidance on model risk management applies to AI systems |
Major approaches globally:

- EU: Comprehensive risk-based law (AI Act)
- UK: Sector-specific, principles-based (no single AI law)
- China: Specific regulations for generative AI, recommendation algorithms, and deepfakes
- US: Fragmented, sector-specific with voluntary frameworks