AI Policy and Regulation Landscape (Topic 2) in Module 3 – AI-Economy (BG)

AI Policy and Regulation Landscape

Why Regulation Matters to Non-Lawyers

AI regulation isn't only for legal teams. Understanding the policy landscape matters because:

  1. Compliance risk: Using AI in regulated domains (healthcare, finance, employment) without understanding applicable rules creates liability
  2. Vendor selection: Regulated industries need AI vendors with appropriate compliance certifications
  3. Opportunity: Well-implemented compliance creates trust advantages
  4. Participation: As a citizen, you have stakes in how AI is regulated in your society

The EU AI Act (2024)

The European Union's AI Act is the world's most comprehensive AI regulation. Key elements:

Risk tiers:

| Risk level | Examples | Requirements |
|---|---|---|
| Unacceptable risk | Social scoring by governments, manipulation of vulnerable groups | Banned outright |
| High risk | AI in hiring, credit, healthcare, critical infrastructure | Strict requirements: transparency, human oversight, accuracy, safety |
| Limited risk | Chatbots, image generators | Transparency requirements (must disclose AI-generated content) |
| Minimal risk | Spam filters, AI video games | No specific requirements |

General Purpose AI (GPAI): Foundation models (GPT-4, Claude, Gemini) are subject to obligations including transparency reporting on training data and capabilities.
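The tiered structure above lends itself to a simple internal triage tool. The sketch below encodes the four tiers as a lookup table for first-pass compliance screening; the tier assignments and the `triage` helper are illustrative simplifications for this module, not legal advice or an official mapping.

```python
# Hypothetical sketch: EU AI Act risk tiers as a compliance-triage lookup.
# Tier assignments are simplified examples from the table above.

RISK_TIERS = {
    "social_scoring": "unacceptable",
    "hiring_screening": "high",
    "credit_scoring": "high",
    "medical_diagnosis": "high",
    "chatbot": "limited",
    "image_generator": "limited",
    "spam_filter": "minimal",
}

OBLIGATIONS = {
    "unacceptable": "banned outright",
    "high": "transparency, human oversight, accuracy, and safety requirements",
    "limited": "must disclose AI-generated content",
    "minimal": "no specific requirements",
}

def triage(use_case: str) -> str:
    """Return the obligations for a use case. Unknown use cases default
    to 'high' so they are escalated for review, not waved through."""
    tier = RISK_TIERS.get(use_case, "high")
    return f"{use_case}: {tier} risk -> {OBLIGATIONS[tier]}"

print(triage("chatbot"))
print(triage("hiring_screening"))
```

Note the conservative default: a use case missing from the table is treated as high risk, which mirrors the compliance posture of reviewing anything unclassified rather than assuming it is exempt.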

US AI Governance

The United States has taken a more fragmented, sector-specific approach:

  • NIST AI Risk Management Framework (voluntary): A best-practice framework any organization can adopt for responsible AI
  • Executive Order on AI (2023): Required federal agencies to manage AI safety risks; encouraged companies to share safety-testing results with the government
  • Sector regulation: FTC, EEOC, banking regulators, and FDA are applying existing laws to AI use in their domains (consumer protection, employment discrimination, drug approval, etc.)

Sector-Specific Rules to Know

| Sector | Key rule |
|---|---|
| Employment | EEOC guidance: AI screening tools can create illegal discrimination even when discrimination is not intended |
| Consumer credit | Fair Credit Reporting Act plus Consumer Financial Protection Bureau guidance on AI underwriting |
| Healthcare | FDA oversight of AI-assisted medical devices and diagnostics; HIPAA applies to health data processed by AI |
| Finance/Banking | OCC and FDIC guidance on model risk management applies to AI systems |

The Global Landscape

Major approaches globally:

  • EU: Comprehensive risk-based law (AI Act)
  • UK: Sector-specific, principles-based (no single AI law)
  • China: Specific regulations for generative AI, recommendation algorithms, and deepfakes
  • US: Fragmented, sector-specific with voluntary frameworks
