TwinLadder Weekly
Issue #13 | August 2025
Editor's Note
I ran a due diligence exercise last quarter on a mid-market acquisition — roughly 4,000 documents in the data room, two jurisdictions, compressed timeline. We used AI for the initial triage and clause extraction. It worked well. It also failed in ways I want to be honest about, because the marketing around AI due diligence tools consistently overpromises.
Every vendor in this space claims to "revolutionise" M&A review. Luminance claims to analyse thousands of documents in hours rather than weeks. Those claims are not false. But they describe one phase of due diligence — document triage and clause extraction — while implying they describe the whole process. The whole process requires something AI cannot do: understanding what matters for this specific deal, this specific buyer, this specific market context.
Here in Europe, where cross-border M&A routinely involves three or four legal systems, the gap between what AI promises and what it delivers is wider than the vendors acknowledge. I want to draw that line clearly, because the firms getting the best results from AI due diligence are the ones that understand where the line falls.
AI in M&A Due Diligence: What It Actually Does (and Does Not)
Luminance is built on a proprietary Legal Large Language Model trained on over 150 million verified legal documents. It classifies documents by type, extracts over 1,000 pre-defined legal concepts (change of control provisions, assignment restrictions, termination rights, IP clauses), and detects anomalies — the outlier contracts with non-standard terms.
In one Bird & Bird case study, two associates reviewed employment contracts covering 20,000 employees in two weeks, analysing 3,600 documents per hour versus 79 manually. Those numbers are real. But let me contextualise them from practice.
| AI Due Diligence Capability | Practical Reality |
|---|---|
| 3,600 docs/hour triage speed (Bird & Bird case study) | Setup and calibration take days for domain-specific deals |
| 1,000+ clause types extracted automatically | Materiality assessment still requires human judgment |
| Anomaly detection across hundreds of contracts | 40%+ of legacy data rooms contain scan-quality issues |
| 26-second review per commercial agreement (industry benchmark) | Cross-document synthesis remains a fundamentally human task |
Where AI genuinely excels — and I have seen this work:
Document triage. The traditional approach — associates opening each document, identifying its type, categorising for review — takes days for a 10,000-document data room. AI classifies documents within hours of upload. This is an unambiguous win. Associates start substantive review immediately instead of spending their first week on sorting.
Finding known issues at scale. "Flag every contract with a change of control provision triggered by this acquisition." In a deal with 500 contracts, AI surfaces all instances in minutes. A human reviewer doing this sequentially misses the clause buried on page 47 of the obscure licence agreement. AI does not.
Anomaly detection. Reviewing 200 vendor contracts, AI identifies that 198 follow standard terms but two contain unusual limitation of liability provisions. Pattern fatigue causes humans to miss exactly this kind of deviation. AI catches it reliably.
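The anomaly-detection idea can be sketched in a few lines. This is an illustration of the principle, not Luminance's actual method: it assumes clause texts have already been extracted and normalised into comparable strings, and the `threshold` for what counts as a "standard" term is an invented parameter.

```python
from collections import Counter

def flag_anomalies(clauses, threshold=0.9):
    """Flag contracts whose clause deviates from the dominant pattern.

    clauses: dict of contract_id -> normalised clause text.
    If one variant covers at least `threshold` of the population,
    everything else is flagged for human review; otherwise there is
    no clear standard term and the whole set needs a human look.
    """
    counts = Counter(clauses.values())
    dominant, n = counts.most_common(1)[0]
    if n / len(clauses) < threshold:
        return []  # no dominant pattern to deviate from
    return sorted(cid for cid, text in clauses.items() if text != dominant)

# 198 standard limitation-of-liability clauses, 2 outliers
population = {f"C{i:03d}": "liability capped at 12 months' fees" for i in range(198)}
population["C198"] = "unlimited liability for all claims"
population["C199"] = "liability capped at EUR 1,000"
print(flag_anomalies(population))  # ['C198', 'C199']
```

In practice the comparison would run over extracted clause features rather than raw text, but the principle is the same: surface whatever deviates from the dominant pattern, which is exactly the deviation pattern fatigue causes humans to miss.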
Where AI falls short — and I have experienced this too:
Business context. AI can identify a change of control clause in a minor software licence. It cannot assess that the EUR 50,000-per-year software is easily replaceable and the provision is immaterial. It flags the clause identically to one in a critical manufacturing contract. Without proper filtering, the volume of AI flags overwhelms the review team.
Cross-document synthesis. The asset purchase agreement excludes certain IP. The licence agreement grants rights to that IP. The disclosure schedules reference both. Understanding how these provisions interact requires connecting dots across documents — a fundamentally human analytical task that AI cannot perform without pre-existing templates. In European cross-border deals, add the complexity of interacting legal systems and the problem compounds.
Non-standard formats. Real data rooms contain document soup: scanned contracts from the 1990s, handwritten amendments, multiple languages, legacy formats. Luminance works primarily with Microsoft Word, requiring format conversion that introduces its own errors and delays. European data rooms are particularly prone to multilingual complexity — a German parent with Polish subsidiaries and Czech joint ventures produces documents in four languages with different contractual conventions.
Negotiation judgment. AI flags 47 contracts with inadequate indemnification provisions. The experienced deal lawyer knows which are worth renegotiating, which to address through escrow, and which to accept as deal cost. That judgment is the product — the flagging is the input.
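The filtering problem described above — separating material flags from noise — is ultimately a human-configured rule layered on top of AI output. A minimal sketch, with invented field names and a hypothetical materiality floor; real deals would use richer criteria supplied by the deal team:

```python
from dataclasses import dataclass

@dataclass
class Flag:
    contract_id: str          # hypothetical identifiers for illustration
    clause_type: str
    annual_value_eur: float
    easily_replaceable: bool

def triage(flags, materiality_floor=250_000):
    """Split AI flags into partner-attention items and probable noise.

    Escalate when the contract exceeds the deal's materiality floor
    or cannot easily be replaced. The floor is a deal-specific input
    a human must supply; the AI only provides the 'what'.
    """
    escalate, park = [], []
    for f in flags:
        if f.annual_value_eur >= materiality_floor or not f.easily_replaceable:
            escalate.append(f)
        else:
            park.append(f)
    return escalate, park

flags = [
    Flag("SW-LIC-014", "change_of_control", 50_000, True),   # minor software licence
    Flag("MFG-001", "change_of_control", 4_000_000, False),  # critical manufacturing contract
]
escalated, parked = triage(flags)
print([f.contract_id for f in escalated])  # ['MFG-001']
```

The point of the sketch is where the judgment lives: the AI produced both flags identically; only the human-supplied thresholds distinguish the replaceable software licence from the manufacturing contract.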
The honest summary: AI makes due diligence more efficient for the volume phases. It does not replace the judgment phases. The firms I have seen struggle with AI due diligence are the ones that conflated efficiency with completeness — that treated AI output as a finished analysis rather than raw material requiring human synthesis.
I worked on one real estate portfolio acquisition where AI flagged hundreds of "issues" that any experienced property lawyer would have immediately recognised as standard market terms. The system was not tuned to real estate norms. Associates spent days validating AI flags that partners would have dismissed on sight. The tool created work rather than saving it, because nobody assessed whether the system was calibrated for the specific domain before deployment.
This connects to a broader competence concern. Under the EU AI Act, which enters full enforcement in August 2026, organisations deploying AI systems — including law firms using AI for client work — bear responsibility for ensuring those systems are used competently. Article 4 already requires documented AI literacy for anyone deploying AI tools. If your associates cannot critically evaluate AI due diligence output, the efficiency gains come with a compliance risk that extends beyond malpractice into regulatory territory.
The lesson is practical: assess data room quality and domain fit before committing to an AI approach. Not every deal benefits from AI triage. And the setup time for proper configuration is real, not trivial.
The Competence Question
A second-year associate on a recent deal told me she was "confident in the AI's coverage" of the contract review. When I asked her to walk me through the change-of-control analysis for the target's three most material commercial relationships, she could not do it without referring to the AI's output. She knew what the AI had flagged. She did not understand the underlying provisions well enough to assess whether the AI's categorisation was correct.
This is the competence risk that AI due diligence introduces. The traditional model — associates reading every contract, even the tedious ones — was inefficient but developmental. It forced junior lawyers to encounter the full spectrum of commercial arrangements, to recognise standard market terms through repetition, and to develop the pattern recognition that ultimately distinguishes a competent deal lawyer from a document reviewer.
If AI handles triage and extraction, what builds that judgment in the next generation of transactional lawyers? The answer has to be deliberate training: having associates review AI output critically, requiring them to explain why a flagged provision matters (or does not), asking them to identify what the AI might have missed. Efficient due diligence and competent development are not automatically compatible. They require intentional design.
European firms face an additional dimension here. Article 4 of the EU AI Act does not merely suggest AI literacy — it mandates it. A firm deploying AI due diligence tools without ensuring its associates can critically evaluate the output is not just risking quality. It is risking compliance.
What To Do
- Assess data room quality before selecting tools. If your data room contains significant legacy documents, scanned PDFs, or multilingual content, factor format conversion time into your timeline. AI does not eliminate this friction — it front-loads it.
- Calibrate AI for your deal type. Out-of-the-box configurations generate noise for specialised practice areas. Before deployment, review the system's default settings against your domain's standard market terms. Remove or deprioritise flags for provisions that are normal in your sector.
- Require materiality assessments from humans, not AI. Use AI for the "what" — identifying provisions across large document sets. Reserve the "so what" for experienced lawyers who understand business context and deal dynamics.
- Build junior lawyer training into AI-assisted workflows. Have associates review a meaningful sample of AI "no issue" documents to verify completeness and develop their own pattern recognition. The time investment is modest; the developmental return is significant.
- Document your AI due diligence governance. Under the EU AI Act's Article 4 obligations, firms deploying AI tools need documented processes for oversight, verification, and competency. Build this into your deal playbook now — before enforcement makes it urgent.
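The sampling step in the training recommendation can be made concrete. A hedged sketch: the `rate`, `minimum`, and seed values are illustrative policy choices, not figures from any regulation or vendor, but recording them supports the documentation obligation above.

```python
import random

def verification_sample(no_issue_ids, rate=0.05, minimum=25, seed=None):
    """Draw a random sample of AI-cleared documents for human re-review.

    rate and minimum are policy choices, not statutory numbers; record
    them (and the seed) in the deal file to evidence oversight.
    """
    k = min(len(no_issue_ids), max(minimum, round(rate * len(no_issue_ids))))
    rng = random.Random(seed)  # seeded so the draw is reproducible
    return sorted(rng.sample(list(no_issue_ids), k))

cleared = [f"DOC-{i:04d}" for i in range(1, 2001)]  # 2,000 AI-cleared documents
sample = verification_sample(cleared, seed=13)
print(len(sample))  # 100
```

A fixed fraction with a minimum floor keeps the review burden proportionate on large data rooms while guaranteeing a meaningful sample on small ones.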
Quick Reads
- Luminance raises $75M Series C (February 2025) — the leading M&A-focused AI platform continues to build, with an eye toward AI-versus-AI contract negotiation.
- RSGI/Harvey adoption report — transaction work is the most frequently cited AI use case at law firms (25 mentions), with due diligence triage leading adoption.
- LegalonTech: Best AI contract review tools 2025 — independent ranking with Luminance scoring 86/100 for M&A, alongside assessments of Kira Systems, Harvey, and newer entrants.
- Harvey top use cases — Harvey's own account of how firms are using the platform for transactional work, including due diligence and deal management.
One Question
If the purpose of junior lawyer due diligence was always partly developmental — learning to read contracts, recognise patterns, assess materiality — what replaces that training when AI handles the reading?
Compliance is the floor. Competence is the mission.
