AI Systems Are Law-Making in Practice
- Jyoti Gogia
- May 19

AI systems today do more than process data: they shape decisions with legal, economic, and reputational consequences. From automated credit scoring to algorithmic trade sanctions and emissions reporting, AI tools are increasingly embedded in state and corporate functions. Yet these systems often operate with an ambiguous legal status, little accountability, and few avenues for contestation.
This post explores the key legal questions surrounding AI systems, the architecture of emerging AI regulation in the EU, and what legal practitioners and compliance architects must do to remain aligned with governance expectations.
1. What is “AI” in Legal Terms?
The EU AI Act defines AI systems broadly as:
“Software that is developed with one or more of the techniques and approaches listed in Annex I and that can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with.”
Unlike GDPR or PSD2, which regulate data and financial flows, the AI Act regulates autonomy, opacity, and risk.
Key Legal Characteristics of AI Systems:
Operate with non-deterministic outcomes
Rely on probabilistic logic
May include automated decision-making (ADM) components subject to Art. 22 GDPR
2. Core Legal Challenges in AI Governance
A. Explainability and Accountability
AI outputs often lack a traceable decision logic. This creates tension with legal doctrines such as:
Due process (WTO, constitutional law)
Right to explanation (GDPR Arts. 13–15 and 22)
Appeal and redress (fundamental rights)
B. Misclassification and Discrimination
AI systems used in public or quasi-public settings (e.g. credit scoring, emissions verification, job filtering) can reinforce existing biases or generate systemic misclassifications. This implicates:
Equality law
Trade law (non-discrimination under WTO rules)
Environmental law (if used in Carbon Border Adjustment Mechanism (CBAM) or ESG reporting)
C. Jurisdiction and Enforcement
AI often operates across borders, making enforcement difficult. Key issues include:
Forum shopping by algorithm developers
Lack of algorithmic audit standards
Conflicts between regulators (e.g. national DPAs such as the CNIL, the EDPB, and emerging national AI agencies)
3. The EU AI Act: A Risk-Based Compliance Framework
The AI Act introduces a tiered regulatory model based on risk classification:
| Risk Level | Example Use Cases | Legal Requirements |
| --- | --- | --- |
| Prohibited | Social scoring, real-time biometric ID | Banned outright |
| High-Risk | Credit scoring, hiring algorithms, CBAM | Ex-ante conformity assessments, transparency, post-market monitoring |
| Limited Risk | Chatbots, AI-assisted drafting | Disclosure to users |
| Minimal Risk | Spellcheck, AI search engines | No regulation beyond existing laws |
High-risk systems must meet extensive obligations (a minimal tracking sketch follows this list):
Risk management systems (Art. 9)
Data governance and documentation (Art. 10–12)
Human oversight mechanisms (Art. 14)
Registration in an EU database (Art. 51)
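To make these obligations operational, compliance teams can track them as a structured checklist. The Python sketch below is illustrative only; the class and field names are hypothetical, not drawn from the Act itself:

```python
# Illustrative only: tracking high-risk obligations as a checklist keyed
# to the articles listed above. All names here are hypothetical.
from dataclasses import dataclass

@dataclass
class HighRiskObligations:
    risk_management_system: bool = False    # Art. 9
    data_governance_docs: bool = False      # Arts. 10-12
    human_oversight: bool = False           # Art. 14
    eu_database_registration: bool = False  # Art. 51

    def open_items(self) -> list[str]:
        """Return the obligations not yet satisfied, for internal reporting."""
        return [name for name, done in vars(self).items() if not done]

status = HighRiskObligations(risk_management_system=True)
print(status.open_items())
# ['data_governance_docs', 'human_oversight', 'eu_database_registration']
```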
4. Intersections with Existing Law
AI rarely operates in a regulatory vacuum. Legal professionals must navigate overlaps and contradictions:
GDPR vs AI Act: How do rights under GDPR Art. 22 interact with transparency rules under the AI Act?
TFEU Competition Law vs AI Optimization: Can price-setting via AI algorithms amount to cartel conduct under Art. 101 TFEU?
ESG Reporting vs Trade Law: Can AI-driven environmental scores be challenged under WTO rules?
Legal professionals must develop frameworks to ensure:
Coherence across legal instruments
Minimisation of liability exposure
Regulatory defensibility (documentation, audit trails)
5. Building AI-Ready Legal & Compliance Structures
To align with the AI Act and broader governance trends, legal teams should:
A. Map AI Systems
Create a registry of AI and ADM systems in use (a data-model sketch follows this list), including:
Purpose and output type
Human oversight layers
Risk category (per Annex III of the AI Act)
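As one way to structure such a registry, the following Python sketch models a single entry. The enum values mirror the tiers in section 3, and every field name is an assumption rather than anything prescribed by the Act:

```python
# Illustrative sketch of one AI-system registry entry. Field names and
# enum values are assumptions, not terms defined by the AI Act.
from dataclasses import dataclass
from enum import Enum

class RiskCategory(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

@dataclass
class AISystemRecord:
    name: str
    purpose: str                 # what the system is used for
    output_type: str             # e.g. "score", "recommendation", "decision"
    human_oversight: str         # description of the oversight layer
    risk_category: RiskCategory  # assessed against Annex III use cases

registry = [
    AISystemRecord(
        name="credit-scoring-v2",
        purpose="consumer credit scoring",
        output_type="score",
        human_oversight="analyst reviews all adverse decisions",
        risk_category=RiskCategory.HIGH,
    ),
]
```

A registry like this also gives auditors a single artefact to inspect when risk classifications are questioned.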
B. Embed Governance by Design
Incorporate legal checks in model training and dataset validation
Maintain access logs, and record override and contestation events for auditability (see the sketch after this list)
Translate AI risk controls into product and UX-level interventions
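One possible shape for such an audit trail is an append-only structured log. The Python sketch below records override and contestation events as JSON Lines; the function name, event labels, and file format are all assumptions for illustration:

```python
# Illustrative sketch: append-only audit trail for override and
# contestation events. Names, labels, and format are hypothetical.
import json
from datetime import datetime, timezone

def log_event(path: str, system: str, event: str, actor: str, detail: str) -> None:
    """Append one structured audit record per line (JSON Lines)."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system,
        "event": event,   # e.g. "human_override", "contestation_filed"
        "actor": actor,
        "detail": detail,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_event("audit.jsonl", "credit-scoring-v2", "human_override",
          "analyst@example.com", "score overridden after manual review")
```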
C. Align Internal Policies with External Law
Update records of processing activities (RoPAs) and data protection impact assessments (DPIAs) to account for AI-based profiling
Embed AI-specific clauses in vendor contracts and audits
Monitor EU-level consultations and sector-specific guidance (e.g., from EBA, ECB, ENISA)
Conclusion: From Black Boxes to Legal Frameworks
The legal future of AI is neither self-regulating nor code-only. As the EU leads the global push for algorithmic accountability, legal practitioners must act as architects of governance—translating emerging AI risk into contract terms, reporting protocols, and operational safeguards.
Whether used in fintech, climate reporting, or public services, AI systems must now comply not only with software benchmarks, but with the rule of law itself.