Note
This brief is not publicly distributed. It goes to practitioners who have been specifically identified as having the depth of experience this research requires. If you are reading this, someone in your network thought your perspective was worth including. If your voice is not in this Index, your peers are defining the legal standard you will be judged against.
A scene playing out in Australian boardrooms right now
A board chair receives a regulator's notice at 9 AM. An AI system that made 40,000 consequential decisions over 18 months is under review. Her legal team asks one question: can you reconstruct what the model did, under what parameters, on any specific decision?
The policy framework is 60 pages. The ethics committee met quarterly. The risk register was updated in February.
The system log does not exist.
This is not a technology failure. The frameworks exist. The proof does not. That gap — between what governance promises and what organisations can prove — is what this research maps.
The Problem
The AI Liability Gap
Governance frameworks document intent. They demonstrate process. What they cannot do is prove that an AI system behaved within defined parameters at the moment a consequential decision was made.
Australian regulators are not asking for better frameworks. They are asking for system evidence — deterministic proof of what the AI did, not documentation of what the policy intended.
The 2026 Privacy Act reforms and ASIC's enforcement precedent under Report 798 are closing the window for process evidence. It will not survive the next wave of regulatory scrutiny. System evidence is what stands up.
Why This Research Matters
The Missing Reference Point
Existing frameworks (the NIST AI RMF, ISO/IEC 42001, the OECD AI Principles) tell organisations what governance should look like. None tells them where they actually stand relative to their peers once the evidence bar is raised.
Boards cannot benchmark themselves. CISOs cannot quantify the gap for the board. GCs cannot advise on reasonable steps without knowing what steps their peers are taking.
The 2026 AI Liability Benchmark will produce that reference point. Most contributors will find they are not where they thought they were. That gap is precisely what the Benchmark is designed to measure.
Why This Cannot Wait
A Market Failure in Progress
The current trajectory is not sustainable. Government is drafting AI regulation on top of IT governance paradigms that predate agentic systems. Vendors are selling compliance theatre: frameworks that document intent but cannot prove execution.
The 2026 AI Liability Benchmark exists because 50 of the sharpest risk and governance practitioners in Australia have an opportunity to define the architectural standard before regulators impose an unworkable one. Not a survey. Not a favour. A chance to shape what reasonable governance actually looks like — before someone else does it for you.
The Central Question
What the Research Is Asking
The question driving the research
"When your organisation's AI system makes a consequential decision — a credit assessment, a staffing recommendation, a triage outcome — what evidence exists that would allow an independent third party to reconstruct exactly what happened, under what authority, and why?"
If your answer takes longer than 30 seconds, the gap is already present.
Research Design
How It Works
Who
50 senior Australian practitioners — board directors, CISOs, GCs, CFOs, and risk leads. Closed group. Curated for practitioner depth, not institutional name.
What
One unrecorded, structured conversation of approximately 20 minutes. Your honest assessment of where the real governance gaps sit in your environment. No institutional detail required. Nothing identifying published.
In Return
The anonymised benchmark data before it reaches the press, before it reaches your regulator, before it reaches your peers. Yours to take into your next board or risk committee meeting.
Who Contributes
The Kind of Practitioner This Research Needs
The practitioners contributing to this research share one characteristic. They are not waiting for a regulator to define the standard for them. They are building the reference point that others will use.
This is not a survey. It is a closed peer conversation about a problem that most organisations have not yet named clearly. The benchmark data is only as rigorous as the practitioners behind it.
Principal Investigator
About SAM B
"I have spent 25 years building technology companies — three exits, two failures, one I would rather not revisit. The governance failures this research maps are not theoretical. I have been in the room when they happened. That is why I am running this research, not just writing about it."
PhD Candidate, Responsible AI & Algorithmic Liability — UTS Data Science Institute. Research focus: the gap between AI governance frameworks and what organisations can prove at the system level.
Founding Principal, CausalShield — runtime causal verification for AI systems. Developed at UTS DSI, deployed in Digital Health. The infrastructure that turns governance intent into system evidence.
Principal, AI Decoded — AI governance advisory practice serving Audit & Risk Committees at ASX mid-market and enterprise firms.
Where Most Organisations Sit
The Governance Evidence Map
Regulatory exposure vs. evidence depth — Australian mid-market
Four quadrants:
Low evidence, high exposure: Exposed & Undefended (where most organisations sit)
High evidence, high exposure: Exposed & Defensible
Low evidence, low exposure: Low Stakes
High evidence, low exposure: Well Governed
Illustrative: based on advisory observations and pre-research conversations. The Index will map actual positions across 50 practitioners.
One Decision. Two Options.
Will you help define the standard, or let others define it for you?
You are asked for approximately 20 minutes: one unrecorded, structured conversation about where the real governance evidence gaps sit in your environment. No institutional detail required. Nothing identifying published. The research group is curated, not open-access.
In return: the anonymised benchmark data before it reaches the press, before it reaches your regulator, before it reaches your peers. You will know exactly how your governance evidence readiness compares to 49 of your direct peers. That is a reference point no board has had access to before.
Contributions collected through Q2 2026 · Research group capped at 50 practitioners
Independent, non-commercial benchmark · AI Decoded · 2026