
AI in Complaints and Disputes: The New Reality

Writer: Shiv Martin

If you work in complaints or dispute resolution, here’s what you need to know today about how AI is impacting your work.

Context and purpose

Artificial intelligence is now routinely used by parties engaging with complaints and dispute resolution systems, and increasingly by institutions themselves. Its use raises practical questions about accessibility, efficiency, procedural fairness, evidence integrity, and public confidence.


This paper does not seek to promote or prohibit the use of AI. Its purpose is to describe how AI is currently being used in complaints and dispute resolution contexts, to identify its strengths and limitations, and to outline the challenges these developments present for courts, tribunals, regulators, and oversight bodies.


Why this matters now

Across my Community of Practice discussions, one theme keeps resurfacing: navigating AI is quickly becoming one of the biggest operational and fairness challenges for complaints and dispute resolution teams. People are using AI to draft complaints and submissions, summarise evidence, translate narratives, and generate “legal research” (including citations), and institutions are responding with new guidance, practice directions, and governance frameworks to manage accuracy, privacy, and integrity risks.


Current uses of AI in complaints and dispute resolution systems

Use by complainants and parties

Parties commonly use commercially available AI tools to:

  • draft or refine complaints, submissions, and correspondence

  • structure narratives into timelines or issue lists

  • summarise long document sets or email chains

  • translate material or convert it into plain language

  • generate explanations of legal concepts, procedural steps, or potential remedies


In many cases, AI is being used as a form of accessibility support. Individuals may turn to AI where they feel overwhelmed by process complexity, lack confidence in written communication, or experience barriers related to language, literacy, disability, or time constraints.


Use by institutions

Institutions are also exploring or implementing AI-enabled tools, including:

  • automated intake and triage support

  • document classification and prioritisation

  • summarisation of large volumes of material

  • drafting assistance for routine correspondence or internal notes

  • website chat tools that provide general procedural information


While these uses vary significantly in sophistication and risk profile, they share a common objective: managing volume, improving consistency, and supporting timely resolution.
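
To make the intake and triage idea concrete, here is a minimal sketch in Python of what a rule-based triage step could look like. Everything in it is a hypothetical illustration: the field names, keywords, and output structure are assumptions, not a real taxonomy, and a tool like this only flags matters for a person to review; it does not decide them.

```python
# A minimal, hypothetical sketch of rule-based intake triage.
# Field names, keywords, and thresholds are illustrative only.

REQUIRED_FIELDS = ["what_happened", "when", "who_was_involved", "outcome_sought"]
URGENCY_KEYWORDS = {"safety", "threat", "harm", "eviction", "disconnect"}

def triage(complaint: dict) -> dict:
    """Return a triage note: missing fields, an urgency flag, and length."""
    missing = [f for f in REQUIRED_FIELDS if not complaint.get(f)]
    text = " ".join(str(v) for v in complaint.values()).lower()
    return {
        "missing_fields": missing,   # prompts an early clarifying call
        "urgent": any(k in text for k in URGENCY_KEYWORDS),  # route to a human first
        "word_count": len(text.split()),  # very long intakes may need summarising
    }

example = {"what_happened": "Power disconnected without notice.",
           "when": "", "who_was_involved": "RetailCo", "outcome_sought": ""}
print(triage(example))
# {'missing_fields': ['when', 'outcome_sought'], 'urgent': True, 'word_count': 5}
```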



The challenges we’re seeing, and tips to respond to them

1) Too much information (and not always the right information)

AI can make complaints and submissions more structured and readable, but it can also increase volume: longer narratives, more annexures, more confident “legal framing”, and greater repetition. That creates a predictable downstream impact: more to sift, more to verify, and more time spent clarifying what is actually in dispute.


In practice, the “time cost” is not just reading. It’s the hidden work of:

  • checking whether the issue is in jurisdiction,

  • separating relevant facts from AI-generated filler,

  • verifying whether authorities and quotations exist,

  • correcting misunderstandings created by confident-but-wrong outputs. 


What to do about it (practical steps):
  1. Name it early. Let your stakeholders know that AI-generated submissions can be polished but unreliable, and that verification may be required.

  2. Ask direct questions. Use specific prompts to extract the key facts you need (what happened, when, who was involved, what outcome is being sought, and what evidence supports each claim).

  3. Design your forms for clarity, not storytelling. Where possible, segment information into relevant fields rather than inviting large slabs of free text. This reduces unnecessary volume and makes triage easier (see the sketch after this list).

  4. Pick up the phone. Use early calls or short case conferences to quickly identify what matters, correct misunderstandings, and reset expectations before the paper trail grows.
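
As one way of picturing point 3, a complaint form can be expressed as discrete fields rather than a single free-text box. This is a hypothetical sketch in Python; the field names are illustrative, not a recommended standard.

```python
# Hypothetical sketch: a complaint form as discrete fields rather than
# one free-text box. Field names are illustrative only.
from dataclasses import dataclass, field

@dataclass
class ComplaintForm:
    what_happened: str      # short factual description
    when_it_happened: str   # date or date range
    who_was_involved: str   # people or business areas
    outcome_sought: str     # what the person wants to happen
    supporting_evidence: list[str] = field(default_factory=list)

form = ComplaintForm(
    what_happened="A promised refund was never paid.",
    when_it_happened="May 2024",
    who_was_involved="RetailCo customer service",
    outcome_sought="Payment of the $250 refund",
    supporting_evidence=["email_3_may.pdf"],
)
print(form.outcome_sought)  # triage can read each field directly
```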


2) Mismanaged expectations (about speed, remedies, and certainty)

AI tools can convey certainty in tone even when the underlying advice is conditional, incomplete, or wrong. This fuels mismanaged expectations: parties may believe a process is faster than it is, that particular outcomes are guaranteed, or that “the law clearly says” something it does not.


That expectation gap is not harmless. It increases repeat contact, escalation, “complaints about the complaints process”, and the emotional load on staff, especially where parties feel they have already been “clear” because their AI-produced submission looks polished.


What to do about it?

The practical solution is not to tell people to stop using AI. It is to strengthen your early clarity resources so that AI doesn’t become the default “front door” into your system.


  1. Invest in the first response. Consider an auto-response to new complaints that includes a short video (60–90 seconds) or a simple FAQ sheet explaining your organisation’s role, what you can and can’t do, typical timeframes, what a good complaint looks like, and what information you need to progress a matter.

  2. Assume people are using AI because they’re stuck. Many individuals turn to AI because they can’t find, interpret, or trust the information your organisation provides. If your guidance is hard to locate or hard to understand, AI will fill the gap. Fix the gap.

  3. Treat your public information as a “preventative control”. Clear, accessible, plain-language guidance reduces misdirection, lowers repeat contact, and improves the quality of incoming complaints.


Think about where AI gets its cues. Large AI platforms draw heavily on public content. Your website copy, downloadable resources, videos, and social media posts can shape what people read, repeat, and believe. Make sure your organisation is consistently publishing credible, current, plain-English information about its role and process, because that content will increasingly “train” the public narrative around what you do.


The benefits are real: that’s why AI is being used

Before we jump into the benefits (and the risks), it’s worth taking a deliberate pause and asking a simple question: why would someone use AI at all when they’re trying to make a complaint or engage in a dispute process?


If you’re a practitioner, the easiest way to answer that is to start with yourself.

Why do you use AI (or why would you be tempted to)?

Most people don’t reach for AI because they want to game the system. They use it because it helps with very human problems:

  • They’re overwhelmed. They don’t know where to start, or what matters, or how to explain it clearly.

  • They want to be taken seriously. They worry their writing will sound “emotional”, “messy”, or unprofessional, so they try to make it more formal.

  • They don’t have the language. English may be an additional language, or they may struggle with literacy, neurodiversity, fatigue, or stress.

  • They’re time-poor. They’re doing this between work shifts, caring responsibilities, or while unwell.

  • They’re trying to reduce conflict. AI can help them write something more controlled and less reactive than what they feel in the moment.

  • They’re seeking certainty. Dispute systems can feel complex and intimidating. AI offers quick answers and a sense of control, even when the answers aren’t perfect.

  • They can’t find (or can’t understand) your resources. If your process information is hard to locate, dense, or full of jargon, people will outsource their understanding to whatever feels easiest.


This is the point: AI use is often a signal not of bad faith, but of friction in the system. If we can hold that lens, we’re better placed to engage with the benefits in a balanced way and to design processes that reduce the need for people to rely on AI as their primary guide.


Used well, AI can make it easier for people to participate in complaints and dispute processes. The key is to be clear-eyed: AI is not a substitute for truth, evidence, or jurisdiction, but it can be a practical support tool for communication.


With the right guidance, AI can help stakeholders to:

  • Get started. Turn a messy set of thoughts into a first draft, outline, or timeline.

  • Structure information. Present events in date order, separate issues, and identify the outcome being sought.

  • Improve readability. Fix grammar, spelling, and formatting so the complaint is easier to understand.

  • Translate and simplify. Support people who use English as an additional language, or who need plain-language explanations.

  • Summarise and organise. Condense long email trails or documents into key points (as long as the original material is retained and checked).

  • Reduce reactivity. Help a person rewrite an angry or distressed draft into something calmer and more professional.

  • Prepare for conversations. Generate a list of questions they want to ask, or a short statement of what they want the other party or the organisation to understand.



Weaknesses and risks associated with AI use

Accuracy and reliability concerns

AI systems do not distinguish between verified and unverified information. They may generate content that is plausible but incorrect, including:

  • inaccurate factual assertions

  • misstatements of legal principles

  • fabricated or misattributed authorities

  • overly confident conclusions unsupported by evidence

This presents particular risks in legal and quasi-legal settings, where accuracy is foundational to procedural fairness.


Inflation of volume without proportional value

AI can significantly increase the length and complexity of submissions without improving substantive clarity. Longer documents, repeated framing, and unnecessary annexures increase review time and verification burdens, potentially delaying resolution.


Distorted expectations

AI-generated material often conveys certainty in tone, even where legal outcomes are discretionary or context-specific. This can create unrealistic expectations about speed, remedies, or entitlement, leading to frustration, escalation, and repeat contact.


Evidence integrity risks

Where AI is used to rewrite witness accounts, generate factual narratives, or summarise documents without careful checking, the integrity of evidence may be compromised. Distinguishing between original material and AI-altered content can become difficult, particularly where disclosure is incomplete.


Privacy and confidentiality concerns

Many AI tools operate on open or third-party platforms. Uploading sensitive, personal, or protected information into such systems may expose parties and institutions to privacy, confidentiality, and data security risks.


Perception of automation and loss of human judgment

Increased reliance on AI, particularly where its role is opaque, may reinforce perceptions that dispute resolution processes are automated, impersonal, or inaccessible. This can undermine trust, especially in systems that already struggle with public confidence.


System-level challenges for dispute resolution bodies

The growing use of AI presents several structural challenges:

Verification burden: Increased need to check accuracy, sources, and evidence integrity.

Expectation management: Greater effort required to correct misunderstandings created upstream.

Equity impacts: Differential AI literacy and access may advantage some users over others.

Governance complexity: Need for clear policies on acceptable use, disclosure, and safeguards.

Transparency obligations: Requirement to explain if and how AI influences triage, prioritisation, or outcomes.


Implications for judicial and regulatory practice

AI does not alter core legal and administrative principles. Human decision-makers remain responsible for:

  • assessing credibility and evidence

  • applying law and policy

  • ensuring procedural fairness

  • protecting confidentiality and safety

  • providing reasons that are intelligible and contestable


What AI does change is the environment in which those responsibilities are exercised. As AI use increases, the importance of early clarification, clear communication, and visible human judgment also increases.



Guidance you can give (so the benefits don’t create new risk)

“It’s better to have a clear process to work within than no structure at all and risk things falling apart.”

If your organisation is willing to acknowledge AI use openly, a simple set of guardrails can protect both parties and the process:


Encourage AI for:

  • structure, formatting, and clarity

  • translation and plain-language rewriting

  • creating timelines, headings, and issue lists


Warn against AI for:

  • uploading sensitive, confidential, or protected material into open tools; parties should satisfy themselves about the privacy settings of the tools they are using and upload only the information that is necessary

  • generating facts, evidence, or quotations

  • inventing case law or “legal rules”

  • rewriting witness evidence as if it were the person’s own recollection


Always require that:

  • the person checks accuracy against original documents

  • the person keeps originals (emails, documents, notes)

  • the person is clear about what outcome they are seeking and why


In other words, AI can be an accessibility support, but only if we pair it with good guidance and safe design. Without that, the same tool that helps one person participate can mislead another, create privacy risks, or increase workload through inaccuracies and over-volume.


Why this matters for complaint systems: if we respond with blanket prohibition or moral panic, we risk undermining accessibility gains and pushing people back into the very “computer says no” experiences that have eroded trust for decades.



A principles-based framework for AI-ready dispute resolution

If your team is trying to work out “what good looks like” in practice, start with principles rather than tools. Here are four that hold up across complaints, conciliation, tribunals and courts:

1) Human dignity and voice
People need to feel heard by a real person, especially when stakes are high. AI must not become a barrier between a person and a decision-maker or dispute resolver.

2) Procedural fairness and explainability
Parties should understand what is happening, why it is happening, and how they can respond. If AI influences triage, prioritisation, or outcomes, it must be transparent and contestable.

3) Accuracy and evidence integrity
AI can be helpful for structure and clarity, but it is not a reliable fact source. The integrity of evidence and legal references must be protected through verification and clear boundaries.

4) Privacy, confidentiality and safety by design
Complaints often contain sensitive information. Systems must reduce the risk of inappropriate disclosure, data leakage, or unsafe handling of protected material.


A practical checklist: what complaints and dispute resolution teams can do now


1) Get human early

Principle: dignity, voice, and early clarity reduce downstream harm.

  • Short, early human contact (a phone call or short online case conference) can prevent weeks of back-and-forth.

  • Use it to:

    • confirm what the issue actually is (and what it is not)

    • reset expectations about role, jurisdiction and remedies

    • explain evidence requirements and what will be persuasive

    • triage urgency and safety issues

    • reduce repeat contact driven by AI-fuelled certainty


A practical prompt for teams: how can the way we communicate encourage stakeholders to make human contact with us?


2) Build AI literacy as a core practice skill

Principle: proportional response depends on recognition.

Teams don’t need to be technical specialists, but they do need baseline literacy so they can respond fairly and efficiently. Train staff to recognise:

  • sudden shifts in writing style or tone

  • over-structured legalistic framing that doesn’t match the person’s earlier communication

  • suspicious citations or confident claims that “the law clearly says…”

  • where hallucinations typically appear (authorities, quotes, absolute legal statements)

  • that “polish is not proof”: good formatting doesn’t equal accuracy


3) Guide stakeholders on safe and effective use

Principle: accessibility improves when guidance is practical, not just cautionary.

Instead of generic warnings, give people a clear “green/amber/red” guide.

Green (generally safe):

  • improving structure, headings, and readability

  • translating into English or plain language

  • creating a timeline or issue list

  • preparing questions for a call or conference

Amber (use with care):

  • summarising long documents (must check against originals)

  • drafting a complaint narrative (must ensure facts are correct)

  • suggesting next steps (must check role/jurisdiction)

Red (high risk / not appropriate):

  • generating facts, evidence, or “what happened”

  • inventing legal authorities or quoting cases without verification

  • rewriting witness evidence as if it is the person’s recollection

  • uploading sensitive or protected material into open tools

This kind of guidance reduces misdirection and improves the quality of what comes in.
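
If your team wants to publish this consistently, one option is to keep the guide as a single piece of structured data so the website, FAQ sheets, and intake-form help text all quote the same wording. The sketch below, in Python, simply encodes the lists above; the structure is an assumption for illustration, not a prescribed schema.

```python
# Hypothetical sketch: the green/amber/red guide as one data source,
# so web pages, FAQs, and intake help text all quote the same wording.
AI_USE_GUIDE = {
    "green": [  # generally safe
        "improving structure, headings, and readability",
        "translating into English or plain language",
        "creating a timeline or issue list",
        "preparing questions for a call or conference",
    ],
    "amber": [  # use with care
        "summarising long documents (check against originals)",
        "drafting a complaint narrative (ensure facts are correct)",
        "suggesting next steps (check role/jurisdiction)",
    ],
    "red": [  # high risk / not appropriate
        "generating facts, evidence, or 'what happened'",
        "inventing legal authorities or quoting cases without verification",
        "rewriting witness evidence as if it is the person's recollection",
        "uploading sensitive or protected material into open tools",
    ],
}

for level, items in AI_USE_GUIDE.items():
    print(level.upper())
    for item in items:
        print(f"  - {item}")
```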


4) Set guardrails that protect timeliness and justice

Principle: governance should match risk.

At minimum, policies and workflows should include:

  • verification gates for citation-heavy or quote-heavy submissions (see the sketch after this list)

  • evidence integrity boundaries (including “no AI rewriting” for witness material)

  • privacy/confidentiality controls (approved tools, training, and clear do-not-upload rules)

  • transparency and contestability if AI influences triage or prioritisation
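
As an illustration of the first guardrail, a verification gate can be as simple as a heuristic that flags citation-heavy text for manual checking. The sketch below is hypothetical: the regular expressions and threshold are assumptions, and a flag only routes material to a person; it proves nothing either way.

```python
# Hypothetical sketch: a "verification gate" that flags citation-heavy
# submissions for manual checking. Patterns and threshold are illustrative.
import re

# Rough patterns for case-style references, e.g. "Smith v Jones" or "[2020] HCA 5".
CITATION_PATTERNS = [
    re.compile(r"\b[A-Z][a-z]+ v [A-Z][a-z]+\b"),
    re.compile(r"\[\d{4}\]\s+[A-Z]{2,}\s+\d+"),
]

def needs_verification(text: str, threshold: int = 3) -> bool:
    """Flag a submission if it contains several citation-like strings."""
    hits = sum(len(p.findall(text)) for p in CITATION_PATTERNS)
    return hits >= threshold

sample = "As held in [2020] HCA 5 and Smith v Jones, the law clearly says..."
print(needs_verification(sample, threshold=2))  # True -> route for human checking
```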



If your team is grappling with AI-generated complaints, mismanaged expectations, or growing pressure on fairness and trust, I support complaints bodies, regulators, tribunals and Ombudsman offices through training, facilitation and system design advice.

👉 Book a confidential conversation to discuss what an AI-ready, human-centred response could look like for your organisation.



Conclusion - As AI rises, it’s time to get more human

Artificial intelligence is now embedded in the reality of complaints and dispute resolution systems. It offers genuine benefits in accessibility, structure, and efficiency, but also introduces significant risks relating to accuracy, volume, expectations and trust.

The appropriate response is neither prohibition nor uncritical adoption. Courts, tribunals, regulators, and oversight bodies should adopt a measured, principles-based approach that recognises current uses of AI, mitigates its weaknesses, and reinforces the central role of human judgment, transparency, and procedural fairness.


The more AI enters our dispute systems, the more we need to prove we are not machines. For decades, many members of the public have felt like they were communicating with computers: portals, scripted responses, standard letters, and processes that do not listen. Now AI risks deepening that experience unless we deliberately respond in a different direction.


For the past 20 years of my career, I’ve been saying: pick up the phone. Now, more than ever, that message matters. Why? Because in an AI-shaped environment:

  1. the signal-to-noise ratio is lower (more words, less clarity),

  2. expectations are higher (more confidence, more certainty),

  3. misunderstandings spread faster (hallucinated law and misdirection),

  4. and trust becomes more fragile (people assume “it’s all automated”). 


As AI becomes more prevalent, the legitimacy of dispute resolution systems will depend not on technological sophistication, but on the clarity, care, and humanity with which those systems respond.



If you’d like to explore the human side of this challenge further, especially how to respond when pressure, conflict and uncertainty escalate, this short video from my channel may be helpful to you.



Shiv Martin is a nationally accredited mediator, practising solicitor, conciliator, decision-maker, and certified vocational trainer.

Hi, I’m Shiv Martin. I’m a nationally accredited mediator, lawyer, conciliator, and conflict management specialist with over a decade of experience working across government, business, and community settings. I support teams to navigate complex and emotionally charged situations through mediation and conciliation, conflict skills training, facilitation, and practical advice on policies and processes. My approach is grounded in law, psychology, and real-world dispute resolution, with a strong focus on clarity, fairness, and workable outcomes.


If you’d like to talk about how I can help you or your organisation, you can get in touch here: 👉 Contact us




Shiv Martin Consulting offers a structured three-level professional development pathway for dispute resolution and regulatory teams.


3 levels of dispute resolution skills training

Level 1 – Core Dispute Resolution Skills: For new starters or professionals working in complaints, case management or early resolution roles.

Level 2 – Managing Challenging & High-Risk Interactions: For experienced conciliators, mediators, complaints managers and regulatory officers.

Level 3 – Community of Practice for regulatory professionals: For experienced staff committed to reflective practice and continuous improvement.


All three levels can be offered in-house; simply email us to discuss your specific needs and we can tailor training to suit your team.


If this post resonated with you, join my community of mediators, HR professionals, and leaders who care about handling conflict with confidence and compassion. Subscribe to receive new articles, free resources and updates.
