TWINLADDER
Regulation (EU) 2024/1689

EU AI Act Explorer

The world's first comprehensive AI regulation. Navigate articles, track implementation deadlines, and understand what matters for legal practice.

11+

Articles

3

Annexes

7

Key for Lawyers

Official Reference

Full Title

Regulation (EU) 2024/1689 of the European Parliament and of the Council laying down harmonised rules on artificial intelligence

CELEX

32024R1689

OJ Reference

OJ L, 2024/1689, 12.7.2024

Entry into Force

August 1, 2024

Full Applicability

August 2, 2027

Implementation Timeline

Aug 2024: Entry · Feb 2025: Prohibited · Aug 2025: Governance · Aug 2026: High-Risk · Aug 2027: Full

Practical Questions

When Article 4 of the EU AI Act mandates "AI literacy" for legal professionals, it does not ask lawyers to become data scientists or software engineers. The regulation recognises a fundamental truth: as artificial intelligence becomes part of legal workflows, practitioners must develop sufficient understanding to use these tools competently, ethically, and in line with their professional duties.

AI literacy in the legal profession means acquiring the skills, knowledge, and understanding needed to make informed decisions about deploying AI systems in practice. It covers both the opportunities AI offers and the risks it creates, from efficiency gains to potential ethical breaches.

For lawyers, this means understanding what AI tools do in their specific context, when it is appropriate to use them, how to verify their outputs, and which risks must be mitigated. It does not require understanding the mathematical algorithms or neural network architectures underlying these systems.

How AI Tools Affect Different Practice Areas

Litigation and Dispute Resolution

AI tools in litigation now assist with case-law research, document review, and predictive analytics. Lawyers must understand that generative AI can produce non-existent case citations, and how important it is to verify every AI-generated citation and legal assertion.

Transactional and Corporate Law

Contract drafting, due diligence, and regulatory compliance increasingly involve AI assistance. AI literacy means understanding how contract-analysis AI identifies clauses, and recognising that AI-generated contract language requires human review before it can be assessed as fit for purpose.

Intellectual Property

IP practitioners using AI for trademark searches, patent analysis, or copyright assessment need specialised literacy, including an understanding of how AI search tools differ from traditional Boolean searches.

Advisory Work and Regulatory Compliance

Lawyers advising on regulatory matters need literacy that includes understanding AI risk-level classification systems, knowledge of sector-specific AI regulations, and the competence to advise on AI-specific contractual provisions.

The November 2025 Darmstadt Precedent

A ruling by the Darmstadt Regional Court in Germany set a powerful precedent: when a court-appointed medical expert used AI extensively without disclosure, the court set the expert's fee at zero euros and declared the entire report inadmissible. The case underscores that AI literacy includes understanding when and how to disclose AI use.

Country Comparison

Although the EU AI Act establishes a harmonised regulatory framework across member states, its implementation reveals significant differences in how individual countries approach AI regulation for legal professionals.

Italy: First Mover with Law 132/2025

Mandatory Disclosure Requirement

Italy stands out as the first EU member state to adopt comprehensive national AI legislation. Law 132/2025, which entered into force on 10 October 2025, requires Italian lawyers to inform clients whenever AI systems are used in the course of representation.

Germany: A Case-Law Approach

The Darmstadt Court Precedent

The Darmstadt Regional Court's ruling of 10 November 2025 held that a court-appointed expert's fee must be set at zero euros where the expert relied extensively on AI without disclosure.

Disclosure is now mandatory for all court-related submissions

The Baltic States: Coordinated Implementation

Latvia, Lithuania, and Estonia are coordinating their approach to implementing the AI regulation, recognising the cross-border nature of legal services in the Baltic region.

View the full EU member state implementation tracker

Track implementation status across all 27 member states

Our Position

The disconnect between how AI tools are built and how legal professionals must use them creates a fundamental challenge. AI systems are built by engineers who think in terms of algorithms, training data, and model architectures. Yet these tools must be used by lawyers who think in terms of legal precedent, client interests, and professional ethics.

Lawyers do not need to understand how AI works. They need to understand what AI does in their specific legal contexts and how to use it responsibly within professional frameworks.

Why Comfort Matters More Than Code

The TwinLadder approach starts from a fundamental recognition: legal professionals are not technical users, and they should not be taught as if they were. Non-technical users have no computer science background, think in industry-specific terms, and learn best by applying knowledge to familiar problems.

Technical Training Does Not Work

  • Irrelevant information that is not needed for competent use
  • Intimidating complexity creates barriers
  • Quickly forgotten without practical application
  • Time is not spent building practical competences

Workflow-Based Learning Works

  • Focus on how AI affects legal workflows
  • Assessing the reliability of AI outputs
  • Verification steps before relying on AI
  • Upholding professional duties

Alignment with Article 4's Legislative Purpose

Article 4 focuses on "informed deployment" and "understanding opportunities and risks", not on technical understanding. The regulation explicitly takes into account users' "technical knowledge, experience, education and training". TwinLadder training is designed precisely for this user profile.

Risk-Based Approach

Understanding AI Risk Categories

The EU AI Act classifies AI systems by risk level. Legal AI tools may fall into high-risk or limited-risk categories depending on their use.

Prohibited

AI practices banned outright

  • Social scoring
  • Subliminal manipulation
  • Real-time biometric ID*

* With law enforcement exceptions

High-Risk

Strict requirements apply

  • Justice & legal research AI
  • Employment decisions
  • Credit scoring
  • Critical infrastructure

See Annex III for full list

Limited Risk

Transparency obligations

  • Chatbots & AI assistants
  • Emotion recognition
  • AI-generated content

Must disclose AI use

Minimal Risk

Voluntary codes apply

  • Spam filters
  • Video game AI
  • Inventory management

No mandatory requirements
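The four tiers above can be sketched as a simple data structure. This is an illustrative mapping of the example use cases listed here, not a legal classification test (the binding tests live in Article 5, Article 6, and the Annexes); all names in the sketch are hypothetical.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"  # Art. 5 practices, banned outright
    HIGH = "high"              # Annex I safety components or Annex III use cases
    LIMITED = "limited"        # transparency obligations apply
    MINIMAL = "minimal"        # voluntary codes only

# Illustrative mapping of the example use cases listed above.
EXAMPLES = {
    "social scoring": RiskTier.PROHIBITED,
    "legal research ai": RiskTier.HIGH,
    "chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def tier_for(use_case: str) -> RiskTier:
    """Look up the illustrative tier for a known example use case."""
    return EXAMPLES[use_case.lower()]
```

A lookup like this can only restate examples; deciding the tier of a real system requires the full legal analysis described in the articles below.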

Essential Reading

Key Articles for Legal Professionals

These articles have direct implications for law firms, in-house counsel, and legal AI vendors.

Article 5

Prohibited AI Practices

Chapter II: Prohibited AI Practices

Key for Lawyers
Prohibited
Moderate Relevance

Bans AI systems for: subliminal manipulation, exploitation of vulnerabilities, social scoring, predictive policing (individuals), untargeted facial recognition scraping, emotion recognition at work/education, and real-time biometric identification (with law enforcement exceptions).

Relevance to Legal Practice

Legal AI tools are unlikely to fall into prohibited categories, but lawyers should verify tools don't use banned techniques for influence or assessment.

See also: Annex I
Effective: Feb 2, 2025
Article 6

Classification Rules for High-Risk AI Systems

Chapter III: High-Risk AI Systems

Key for Lawyers
High-Risk
Critical for Lawyers

Defines what makes an AI system 'high-risk': either (1) a safety component/product under EU harmonisation legislation in Annex I, or (2) falls under use cases in Annex III. Exceptions for narrow procedural tasks.

Relevance to Legal Practice

AI systems used for 'administration of justice and democratic processes' are HIGH-RISK under Annex III(8). Legal research and case outcome prediction tools may qualify.

Effective: Aug 2, 2026
Article 9

Risk Management System

Chapter III: High-Risk AI Systems

Key for Lawyers
High-Risk
High Relevance

Mandates continuous risk management for high-risk AI: identify risks, implement mitigation, test systems, monitor post-deployment. Must consider reasonably foreseeable misuse.

Relevance to Legal Practice

Lawyers deploying high-risk AI must understand the vendor's risk management. Due diligence should verify compliance.

Effective: Aug 2, 2026
Article 14

Human Oversight

Chapter III: High-Risk AI Systems

Key for Lawyers
High-Risk
Critical for Lawyers

High-risk AI systems must be designed for effective human oversight. Humans must be able to understand outputs, intervene, and override the system. 'Human-in-the-loop' or 'human-on-the-loop' required.

Relevance to Legal Practice

Lawyers MUST maintain oversight of AI outputs. Blind reliance on AI without review violates professional duty and likely this article.

See also: Art. 9, Art. 26
Effective: Aug 2, 2026
Critical Dates

Implementation Timeline

The EU AI Act phases in over three years. Track key milestones and prepare your compliance strategy.

August 1, 2024

Entry into Force

The EU AI Act officially enters into force, starting the implementation timeline.

February 2, 2025

Prohibited AI Practices

Ban on AI systems with unacceptable risk: social scoring, manipulation, real-time biometric identification (with exceptions).

August 2, 2025

Governance & GPAI Rules

Current Phase

EU AI Office fully operational. Rules for general-purpose AI models apply. Penalties framework active.

August 2, 2026

High-Risk AI Obligations


Full compliance required for high-risk AI systems. Conformity assessments, technical documentation, human oversight mandatory.

August 2, 2027

Full Applicability


All provisions fully applicable. High-risk AI systems covered by Annex I product legislation must comply.
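The phased dates above can be modelled as a small lookup, e.g. for deadline tracking in a compliance calendar. The structure and function names here are illustrative, not part of any official tooling.

```python
from datetime import date

# Key application dates of Regulation (EU) 2024/1689, from the timeline above.
MILESTONES = {
    date(2024, 8, 1): "Entry into force",
    date(2025, 2, 2): "Prohibited AI practices apply",
    date(2025, 8, 2): "Governance & GPAI rules apply",
    date(2026, 8, 2): "High-risk AI obligations apply",
    date(2027, 8, 2): "Full applicability",
}

def next_milestone(today: date):
    """Return the next upcoming (date, label) pair, or None if all have passed."""
    upcoming = [(d, label) for d, label in sorted(MILESTONES.items()) if d > today]
    return upcoming[0] if upcoming else None
```

For example, from any date in early 2025 the next milestone is the February 2 prohibition deadline.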

Full Text Reference

Browse by Chapter

Navigate the complete EU AI Act structure with legal practice annotations.

Chapter I

General Provisions

3 articles

Chapter II

Prohibited AI Practices

1 article

Chapter V

General-Purpose AI Models

1 article

Chapter XII

Penalties

1 article
Critical Annexes

Key Annexes

Annexes define high-risk categories and technical requirements.

I

Union Harmonisation Legislation

Lists EU product safety legislation that, when combined with AI as a safety component, triggers high-risk classification under Article 6(1).

III

High-Risk AI Systems

Lists use cases that automatically classify AI as high-risk. Includes: biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, and JUSTICE/DEMOCRATIC PROCESSES.

XIII

Criteria for Classification of GPAI with Systemic Risk

Criteria for determining if a general-purpose AI model poses systemic risk: training compute >10^25 FLOPs, high-impact capabilities, number of users, cross-border reach.
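The compute criterion is a plain numeric threshold, so a check of that single factor can be sketched directly. This is a hedged illustration only: the actual classification weighs all the listed factors together, and the constant and function names are hypothetical.

```python
# Presumption threshold for systemic risk based on cumulative training compute.
SYSTEMIC_RISK_FLOPS = 10 ** 25

def exceeds_compute_threshold(training_flops: float) -> bool:
    """True if cumulative training compute exceeds the 10^25 FLOP presumption."""
    return training_flops > SYSTEMIC_RISK_FLOPS
```

A model trained with 3x10^25 FLOPs would trip the presumption, while one at 10^24 FLOPs would not; capabilities, user numbers, and cross-border reach must still be assessed separately.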

Track EU Member State Adoption

Monitor which countries have implemented the EU AI Act, designated AI authorities, and published bar association guidance.

3

Implemented

12

In Progress

12

Not Started

Ready to Achieve Article 4 Compliance?

TwinLadder offers accredited CPD programmes designed specifically for legal professionals navigating AI regulation.

Last updated: February 6, 2026

Official EUR-Lex Source