Last Updated on 13/08/2025 by Damin Murdock

As Artificial Intelligence (AI) becomes increasingly integrated into business operations, from hiring and lending to customer service and risk assessment, organisations face a new frontier of legal risk: liability for discriminatory outcomes produced by their AI systems. In Australia, the law is clear: organisations can be held legally responsible, particularly if the discrimination is unreasonable, based on unreliable data, or if the organisation knew or should have known about the discriminatory impact.

At Leo Lawyers, we understand the complexities of AI ethics and legal compliance. We help businesses navigate these emerging challenges, ensuring their AI development aligns with Australian anti-discrimination laws and minimises exposure to liability. This article outlines the critical aspects of organisational liability for AI-driven discrimination and provides essential considerations for managing these risks.

Direct Liability for AI-Driven Discrimination

Organisations are generally liable for discriminatory actions taken by their representatives or systems acting on their behalf. In the context of AI, this means if an algorithm, acting as a “representative” of the organisation, produces a discriminatory outcome, the organisation can be held accountable. Crucially, the motive for discrimination is irrelevant; what matters is whether the treatment results in less favourable outcomes for protected groups, such as those defined by race, gender, age, or disability.

Statistical and Data-Based Discrimination: Permissibility and Pitfalls

AI systems often make decisions based on statistical patterns within large datasets. While this can lead to efficiency, it also carries inherent risks of perpetuating or even amplifying existing societal biases. Despite these risks, in limited circumstances, discrimination might be permissible if it is based on reliable actuarial or statistical data. Such discrimination must also be reasonable, having regard to the data and other relevant factors, and applied only when no reasonable alternative data is available.

Requirements for Data Validity

For data to be considered valid and relied upon to justify a potentially discriminatory outcome, it must meet strict criteria: it must originate from a source that is reasonable to rely upon, be current and not discredited, be directly applicable to the particular decision being made, and be drawn from a sample large enough to support reliable use and accurate statistical inference.

Proving Discriminatory Impact

Demonstrating that an AI system has a discriminatory impact can be achieved through various forms of evidence. This includes statistical disparities that show patterns of exclusion or harm, concrete proof of disproportionate impacts on protected groups, and evidence about the specific circumstances of the affected group.
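By way of illustration only (this is a simplified sketch, not a legal test), one common way to surface a statistical disparity is to compare the rate of favourable outcomes across groups and take the ratio of the lowest rate to the highest; group labels and the data below are hypothetical:

```python
from collections import Counter

def selection_rates(outcomes):
    """Favourable-outcome rate per group.

    `outcomes` is a list of (group, approved) pairs, where
    `approved` is True for a favourable decision.
    """
    totals, approved = Counter(), Counter()
    for group, ok in outcomes:
        totals[group] += 1
        if ok:
            approved[group] += 1
    return {g: approved[g] / totals[g] for g in totals}

def disparity_ratio(rates):
    """Ratio of the lowest to the highest selection rate.

    A low ratio signals a disparity worth investigating; no single
    numeric threshold is determinative under Australian law.
    """
    return min(rates.values()) / max(rates.values())

# Hypothetical loan decisions tagged by group
decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]
rates = selection_rates(decisions)   # {"A": 0.667, "B": 0.333}
print(disparity_ratio(rates))        # 0.5
```

A ratio well below 1.0, as here, would be one piece of statistical evidence; as the article notes, it should be paired with qualitative evidence about the affected group's circumstances.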


Managing AI Discrimination Risks: Essential Considerations for Businesses

Organisations deploying AI should take proactive measures to mitigate the risk of legal liability for discriminatory outcomes.

    1. Preventive Measures: Organisations should regularly audit their AI systems for bias, document the statistical basis for any decisions made by AI and continuously monitor outcomes for protected groups to identify and address potential disparities.
    2. Risk Factors for Increased Liability: Liability for AI-driven discrimination may increase significantly if the organisation knew about or should have detected the bias, failed to take corrective action when bias was identified, or used outdated or unreliable data in training or deploying its AI systems.
    3. Understanding Limitations: It is important to acknowledge that there is no universal measure for what level of statistical disparity definitively constitutes discrimination. Statistical evidence alone may have shortcomings and should ideally be supported by qualitative evidence to build a comprehensive case. Furthermore, past discriminatory practices can influence current data, and organisations must account for these historical biases to prevent their perpetuation by AI systems.
    4. Documentation: Maintaining clear and comprehensive records is paramount. Organisations should document their data sources and validation processes, the decision-making logic of their AI systems, and all monitoring and correction efforts undertaken to address potential biases.
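The monitoring and documentation steps above can be sketched in code. The following toy example computes per-group approval rates, flags any disparity below an illustrative threshold (the 0.8 figure is an assumption for demonstration, not a legal standard), and appends a dated record to a CSV log so the audit itself is documented:

```python
import csv
import datetime

def audit_outcomes(outcomes, log_path, threshold=0.8):
    """Toy monitoring sketch: compute per-group approval rates,
    flag the batch if the lowest rate falls below `threshold`
    times the highest, and append a dated row to a CSV log.

    `outcomes` is a list of (group, approved) pairs. The threshold
    is purely illustrative.
    """
    totals, approved = {}, {}
    for group, ok in outcomes:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + int(ok)
    rates = {g: approved[g] / totals[g] for g in totals}
    flagged = min(rates.values()) / max(rates.values()) < threshold
    with open(log_path, "a", newline="") as f:
        csv.writer(f).writerow([
            datetime.date.today().isoformat(),
            {g: round(r, 3) for g, r in rates.items()},
            "FLAGGED" if flagged else "ok",
        ])
    return rates, flagged
```

In practice the log would feed the documentation trail described in point 4, evidencing that bias monitoring occurred and what corrective action followed.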

Conclusion

The deployment of AI systems brings with it a clear legal responsibility for organisations to prevent discriminatory outcomes. Australian law provides avenues for holding entities liable if their AI systems produce unreasonable discrimination, particularly when based on flawed data or when bias is known but unaddressed. Proactive measures, including regular audits, rigorous data validation, and meticulous documentation, are essential for mitigating these risks.

At Leo Lawyers, we provide expert legal advice to businesses navigating the continuously evolving intersection of AI and the law. Feel free to contact Damin Murdock at Leo Lawyers via our website, on (02) 8201 0051 or at office@leolawyers.com.au. If you liked this article, please subscribe to our newsletter via our website, and follow us on YouTube, LinkedIn, Facebook and Instagram. If you found this article or video helpful, please also consider leaving us a favourable Google review.

DISCLAIMER: This is not legal advice and is general information only. You should not rely upon the information contained in this article and if you require specific legal advice, please contact us.


Damin Murdock (J.D | LL.M | BACS - Finance) is a seasoned commercial lawyer with over 17 years of experience, recognised as a trusted legal advisor and courtroom advocate who has built a formidable reputation for delivering strategic legal solutions across corporate, commercial, construction, and technology law. He has held senior leadership positions, including director of a national Australian law firm, principal lawyer of MurdockCheng Legal Practice, and Chief Legal Officer of Lawpath, Australia's largest legal technology platform. Throughout his career, Damin has personally advised more than 2,000 startups and SMEs, earning over 300 five-star reviews from satisfied clients who value his clear communication, commercial pragmatism, and in-depth legal knowledge. As an established legal thought leader, he has hosted over 100 webinars and legal videos that have attracted tens of thousands of views, reinforcing his trusted authority in both legal and business communities.