

AI Competence Self-Assessment: A 40-Point Checklist

You cannot build what you cannot measure. This checklist tells you exactly where your AI competence stands — and where it does not.

8 January 2026 · Līga Pauliņa, Co-founder and Director of the TwinLadder Academy · 8 min read


I designed this checklist because I was tired of vague conversations about AI readiness. Organisations tell me their people are "fairly competent" with AI or "getting there." Those phrases mean nothing. Competence either exists or it does not, and the only way to know is to measure it against specific, observable capabilities.

This is not a feel-good exercise. Some of these questions will be uncomfortable. That is the point. The gaps you identify here are exactly the gaps that Article 4 of the EU AI Act requires you to close.

Be honest with yourself. Nobody sees your score but you.

Part 1: Technical Literacy (10 points)

Foundation concepts — Do you understand how these tools actually work?

  • Can you explain in plain language what a large language model does — specifically, that it predicts the next most probable word based on patterns in training data?
  • Do you understand why AI tools have knowledge cutoff dates and what that means for legal or regulatory research?
  • Can you describe what RAG (Retrieval Augmented Generation) is and why legal AI tools use it to reduce hallucination?
  • Do you know what "temperature" controls in AI output and why it affects reliability?
  • Can you explain the difference between a foundation model and a fine-tuned model?
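If "temperature" is the concept on this list you are least sure about, a toy sketch can make it concrete. The snippet below is illustrative only (the scores and vocabulary are invented, not from any real model): it shows how a language model picks the next word from a set of scored candidates, and why a low temperature makes output near-deterministic while a high temperature makes it more varied and less predictable.

```python
import math
import random

def sample_next_word(scores, temperature):
    """Pick the next word from toy model scores.

    Low temperature -> the top-scoring word almost always wins (predictable).
    High temperature -> probabilities flatten out (varied, less reliable).
    """
    # Dividing scores by the temperature is what the setting actually does.
    scaled = {word: s / temperature for word, s in scores.items()}
    total = sum(math.exp(s) for s in scaled.values())
    probs = {word: math.exp(s) / total for word, s in scaled.items()}
    words = list(probs)
    return random.choices(words, weights=[probs[w] for w in words])[0]

# Invented scores for continuations of "The statute of ..."
scores = {"limitations": 4.0, "frauds": 2.0, "liberty": 0.5}

print(sample_next_word(scores, temperature=0.1))  # almost always the top word
print(sample_next_word(scores, temperature=2.0))  # noticeably more random
```

The practical point for professional use: a tool configured with a higher temperature will give you different answers to the same question on different runs, which is exactly why the consistency checks in Part 3 matter.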

Limitation awareness — Do you understand where these tools break?

  • Can you define "hallucination" and explain the mechanism that causes it?
  • Do you understand "sycophancy" — the tendency of AI to agree with your premise even when it is wrong?
  • Can you identify situations where AI output is likely to be unreliable (novel questions, recent law, complex multi-step reasoning)?
  • Do you understand "misgrounding" — when the model produces a correct-sounding statement but attributes it to the wrong source?
  • Can you explain why an AI tool might perform well on common legal questions but poorly on unusual ones?

Scoring: If you checked fewer than 5 items, your technical literacy is below the threshold that Article 4 likely requires. This is the starting point, not an advanced skill.

Part 2: Verification Skills (10 points)

Citation verification — Do you actually check what the tool gives you?

  • Do you independently verify every case citation or statutory reference in AI output against primary sources?
  • Do you confirm that cited authorities actually support the proposition they are cited for?
  • Do you check that legal holdings are accurately characterised — not just that the case exists, but that it says what the AI claims?
  • Do you verify direct quotations against original sources?
  • Do you check currency — whether cited authorities are still good law, not overruled or superseded?

Analysis verification — Can you evaluate the reasoning, not just the citations?

  • Can you identify logical gaps in AI-generated legal analysis?
  • Do you check AI output against your own professional understanding before relying on it?
  • Can you recognise when AI has missed relevant contrary authority?
  • Do you verify that AI correctly applies legal principles to the specific facts of your matter?
  • Can you identify when AI has oversimplified a nuanced legal question?

Scoring: If you checked fewer than 7 items, your verification practice is inadequate for professional AI use. This is the area where most sanctions cases originate.

Part 3: Prompt Engineering and Testing (8 points)

Effective prompting — Can you get reliable output from these tools?

  • Do you structure prompts with clear, specific instructions rather than vague requests?
  • Do you provide relevant context — jurisdiction, practice area, specific facts — to guide the response?
  • Do you know how to request that the AI acknowledge uncertainty rather than fabricate confident answers?
  • Do you understand how to avoid leading the AI toward a desired conclusion?

Adversarial testing — Do you challenge the output?

  • Do you ask follow-up questions to test the robustness of AI reasoning?
  • Do you prompt for contrary authority or counterarguments?
  • Do you run critical queries multiple times to check consistency?
  • Do you test AI responses against known edge cases in your practice area?

Scoring: If you checked fewer than 4 items, you are likely getting lower-quality output than the tools are capable of producing. Prompt engineering is not a luxury — it directly affects the reliability of what you receive.

Part 4: Ethical and Regulatory Compliance (8 points)

Regulatory awareness — Do you know the rules?

  • Have you reviewed the AI guidance applicable to your jurisdiction (bar guidance, court rules, Article 4 requirements)?
  • Do you know whether your jurisdiction requires disclosure of AI use in court filings?
  • Are you aware of continuing education requirements for AI competence in your jurisdiction?
  • Do you understand your obligations under Article 4 of the EU AI Act, if applicable?

Confidentiality — Do you protect client information?

  • Do you know what happens to data entered into the AI tools you use — whether it is stored, used for training, or accessible to third parties?
  • Have you reviewed the terms of service and data handling policies of your AI tools?
  • Do you avoid entering confidential client information into tools that are not approved for that purpose?
  • Do you understand the geographic location of AI processing and its implications for data protection?

Scoring: If you checked fewer than 5 items, you have significant compliance exposure. These are not best practices — they are professional obligations.

Part 5: Ongoing Development (4 points)

  • Do you actively monitor developments in AI regulation and professional guidance?
  • Have you completed structured AI training (not just vendor product demonstrations) in the past 12 months?
  • Do you regularly use AI tools in your professional work and learn from the experience?
  • Do you contribute to your organisation's AI policies, training, or governance?

Scoring: If you checked fewer than 2 items, your competence is static. AI capabilities and regulations are not.

Your Total Score

Add up your checked items across all five parts.

0-15: Critical gaps. Your AI competence is significantly below what professional practice and Article 4 require. Immediate, structured training is needed. Do not use AI for client-facing work until you have addressed the foundational gaps in Parts 1 and 2.

16-25: Developing. You have a foundation but significant areas need work. Focus on the parts where you scored lowest. Most professionals I assess fall in this range — you are not alone, but you are not where you need to be.

26-35: Competent. You have solid AI literacy and good professional practices. Continue monitoring developments and refining your approach. Consider whether you can help colleagues develop their competence.

36-40: Advanced. You have comprehensive AI competence. You should be contributing to your organisation's AI governance — policy development, training delivery, and knowledge sharing.
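For teams running this assessment at scale, the banding above is simple enough to automate. A minimal sketch (the function name and band labels are mine, chosen to match the thresholds described above):

```python
def competence_band(score: int) -> str:
    """Map a 0-40 checklist score to the bands described above."""
    if not 0 <= score <= 40:
        raise ValueError("score must be between 0 and 40")
    if score <= 15:
        return "Critical gaps"
    if score <= 25:
        return "Developing"
    if score <= 35:
        return "Competent"
    return "Advanced"
```

Useful, for example, when aggregating anonymised self-assessment results across a department to see where structured training is most urgent.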

What To Do With Your Score

A score is only useful if it drives action. Here is what I recommend based on where you land.

If you scored below 16: Enrol in a structured AI competence programme. Not a lunch-and-learn, not a webinar. A programme that covers technical literacy, verification skills, and professional obligations in depth. This is urgent.

If you scored 16-25: Identify the specific parts where you scored lowest. If technical literacy is the gap, invest time in understanding how the tools work. If verification is the gap, build a systematic verification workflow and practise it on every AI interaction. If compliance is the gap, read the relevant guidance documents — ABA Opinion 512, your bar's AI guidance, Article 4 of the AI Act.

If you scored 26-35: Your development is about refinement, not foundation-building. Stay current with regulatory developments. Experiment with more sophisticated uses of AI tools. Push yourself to use adversarial testing more consistently.

If you scored 36-40: Teach others. The most effective way to deepen your own competence is to help others develop theirs. And your organisation needs people who can lead its AI governance efforts.

The Honest Truth

Most professionals I work with score between 15 and 25 on their first assessment. The common pattern is strength in one or two areas — usually prompt engineering and basic tool use — and weakness in the areas that matter most for professional practice: verification, limitation awareness, and regulatory compliance.

The professionals who use AI most frequently are not always the most competent. They are sometimes the most exposed, because frequent use without adequate verification creates more opportunities for error.

This checklist is a starting point. Take it honestly, act on what it reveals, and come back to it in six months to measure your progress.

Competence is built, not claimed.