Artificial Intelligence in Victoria’s Courts and Tribunals: Consultation Paper

3. Benefits and risks of AI

Overview

• The use of AI in Victoria’s courts and tribunals could result in benefits, risks and challenges.

• These risks and benefits will be different depending on the type of AI system and how it is used. For example, the risks and opportunities of generative AI are different to those of expert systems.

• Risks and benefits will change depending on whether an AI system is applied to a scenario it was designed and tested for, and whether it is used for a proper and legitimate purpose.

• This section sets out the overarching benefits and risks of AI. Part B will discuss specific applications in courts and tribunals and will draw out where benefits and risks might differ based on their use and application.

Potential benefits of AI

3.1 AI offers potential benefits:

• Efficiency through the sustainable administration of justice, and by reducing public and private costs.

• Access to justice, including improved access to legal advice and representation. This might help people understand and participate in resolving legal problems in new ways. Disputes might be resolved through alternative processes rather than through a court or tribunal.

• Improved quality of processes and outcomes made possible by AI technologies through new and innovative approaches.

Efficiency

3.2 AI may help courts, tribunals and the legal profession improve their efficiency and cost effectiveness.[1]

3.3 AI systems can process huge amounts of data and provide analysis, new content or decisions much faster than humans. They can summarise legal cases or documents far more quickly than a human researcher.[2] AI tools are already used by Australian legal professionals for a variety of legal tasks, including legal research, writing emails and analysing and summarising documents.[3]

3.4 There may be opportunities to reduce the time taken to complete tasks that involve a high degree of repetition and little discretion, such as by automating some administrative and case management processes.[4] This is significant given that only a small proportion of cases filed in higher courts proceed to a hearing before a judge.[5]

3.5 AI offers efficiencies, but there are also costs related to procuring and managing new technology. The adoption of AI systems will require financial investment in purchasing software, training staff, making technical adjustments to improve performance, accuracy and reliability, and ongoing maintenance and monitoring.

3.6 AI can only increase cost-effectiveness if the risks of using it, such as inaccuracy and bias, can be mitigated. Importantly, efficiency needs to be balanced with trust, fairness and the other fundamental values that underpin the justice system (see Part C).

Access to justice

3.7 AI may improve access to justice by providing new, more effective ways for people to resolve their legal problems.

3.8 AI could help people to avoid or contain legal disputes by promoting access to legal information.[6] In The Justice Project report, the Law Council of Australia noted that access to justice extends beyond the formal justice system to include access to ‘legal information and education, non-court based dispute resolution and law reform’.[7]

3.9 AI may help people to understand their legal problems and access timely advice and representation. Chatbots and intake tools are designed to help people understand and categorise legal problems, then connect them with appropriate legal advice or representation.[8] AI can also provide alternative methods to resolve disputes, reducing the number of matters that proceed to court.[9]

3.10 Where AI is integrated into hearings through real-time captions or translations, it may help some court users engage more easily with proceedings. AI may even assist self-represented litigants to engage with the court system (see Part B).[10]

Innovative approaches to improve outcomes and processes

3.11 The pace of technological change creates unique opportunities for innovation.[11] ChatGPT reached 100 million monthly active users just two months after launch.[12] In 2023, 149 foundation AI models (such as the model underlying ChatGPT) were released, more than double the number released in 2022.[13] AI patents increased by over 60 per cent between 2021 and 2022.[14]

3.12 The legal sector is adopting AI at rapidly increasing rates. A recent survey of 560 Australian lawyers found that half of respondents use generative AI for legal tasks.[15] The use of AI by lawyers will increasingly impact our courts, particularly in the production of evidence (see Part B). Courts may benefit from innovative uses of AI in a range of ways. One possibility is to reduce vicarious trauma for court staff by minimising exposure to large amounts of distressing material (see Part B).

3.13 Victorian courts and tribunals have an opportunity to plan systemic improvements strategically, rather than react to change. In the United Kingdom, the Her Majesty’s Courts & Tribunals Service reform programme deployed over 50 projects to support the courts’ use of digital technology.[16] Likewise, the New Zealand Chief Justice has considered implementing AI as part of the Digital Strategy for Courts and Tribunals.[17]

Potential risks of AI

3.14 We have considered opportunities for the legal system to benefit from AI. Careful consideration of its limitations and risks can help us to understand where and how these opportunities might be best realised.

3.15 Key risks and limitations discussed below draw on examples identified by the Australasian Institute of Judicial Administration[18] and overseas jurisdictions. These include:

• Data security and privacy concerns

• Explainability and the opacity of AI technology

• Bias, including data bias and system bias

• Inaccuracy, including hallucinations (fictional outputs) and deepfakes

• Reduced quality of outputs and devaluing of human judgement

• Access to justice challenges.

3.16 Other issues might include:

• deskilling justice professionals

• impacts on judicial independence

• harm to public trust in judges and courts

• ‘truth decay’ caused by a decline of trust in legal decisions.[19]

Data insecurity and loss of privacy

3.17 Significant privacy and data security risks arise from the way AI systems access and use data.[20] Many AI technologies are trained and tested on large data sets. Automated decision-making systems usually collect, use and store personal and organisational data.

3.18 Privacy risks depend on the ways an AI system uses personal information and whether the data collected is secure. Personal information should only be used for the purposes for which it was collected and for which consent was provided. But AI can use existing data to draw conclusions about a person based on their personal characteristics.[21] These conclusions may fall outside the scope of the original collection and consent.

3.19 Data security and privacy issues will vary depending on whether an AI system is developed for the organisation or supplied by a third party.[22] Many organisations and institutions use off-the-shelf AI systems, which might mean that data moves from the court or tribunal to other organisations or jurisdictions.[23] Court data may be passed on to a third party when an AI system is trained on internal court data, and the host or provider may retain that information for training purposes. The provider may monitor the inputs and outputs of an AI system as part of its terms of service or licensing requirements. Security risks will also vary depending on whether and how securely data is stored, and how long it is retained.

3.20 Malicious actors may conduct cyberattacks or may seek to exploit vulnerabilities in the underlying architecture, the operating protocols or the use of AI tools. These attacks may attempt to access and exploit personal data, disrupt court operations or simply cause enough reputational damage to reduce public trust.

Explainability and the ‘black box’

3.21 ‘Explainability’ refers to the ability to explain how an AI system makes its predictions or decisions; the ‘black box’ describes systems where this is not possible. It is important that courts and tribunals can explain their decisions, and that people can understand and challenge how decisions are made. This has implications for important principles that guide courts and tribunals, especially:

• fairness

• natural justice

• public trust.

3.22 Two types of barriers to understanding the outputs from an AI system are set out below. These are the technical ‘black box’ and legal opacity.[24]

Technical ‘black box’

3.23 It is often difficult to explain how AI algorithms operate. Their complex architecture makes it difficult to understand how they reach answers. Even when the inputs are known, it can be unclear how the AI model transforms them into an output.[25] The problem is greater for deep learning models, which draw on vast amounts of data through a layered, complex structure. Outputs can emerge from calculations that are not reviewable, and even developers may not fully understand how a particular result was produced.[26]
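
The difficulty can be illustrated with a deliberately small example. The sketch below (in Python, with invented weights) shows a toy two-layer network: every parameter is visible, yet none of them, read individually, explains why one input is scored higher than another. A production deep learning model applies the same kind of layered arithmetic across millions of parameters.

```python
import math

# Hypothetical 'learned' weights for a tiny two-layer network (invented values).
W1 = [[0.83, -1.42, 0.07],
      [-0.55, 0.91, 1.30]]   # 2 inputs -> 3 hidden units
W2 = [0.62, -0.88, 1.14]     # 3 hidden units -> 1 output

def predict(x1: float, x2: float) -> float:
    # The 'decision' is nothing more than layered arithmetic over the weights.
    hidden = [max(0.0, x1 * W1[0][j] + x2 * W1[1][j]) for j in range(3)]  # ReLU layer
    score = sum(h * w for h, w in zip(hidden, W2))
    return 1 / (1 + math.exp(-score))  # squashed to a probability-like output

# Every weight is visible, yet no individual number 'explains' why one input
# scores higher than another.
print(predict(1.0, 0.0), predict(0.0, 1.0))
```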

3.24 There is a growing field of ‘explainable AI’ which attempts to bridge the gap between machine-based decision-making and the need for human comprehension.[27] For now, the technical black box remains problematic.

Legal opacity

3.25 AI may not be explainable when developers seek to protect their intellectual property in the technology. Owners and developers may not want to explain how an AI model works and may seek legal protection to maintain a commercial advantage.[28] In the United States, a risk assessment tool was developed to assess how likely convicted offenders were to reoffend. The company that developed it refused to explain the underlying method to a court, on the basis that it was proprietary.[29] This kind of opacity has been described as a ‘legal black box’.[30]

3.26 Some academics have proposed that the laws of negligence, product liability and other torts could be used to overcome the issue of legal opacity in AI.[31]

3.27 ACCC v Trivago provides an example of how the Federal Court has managed issues relating to commercial sensitivity and algorithmic outputs.[32] In response to the commercial sensitivity of Trivago’s algorithm and data, the court ordered concurrent evidence from two experts in closed court.

Bias

3.28 AI can sometimes result in bias against groups of people. If this happens in courts and tribunals it has a fundamental impact on principles that are central to the justice system, especially fairness and non-discrimination.

3.29 AI may assist in reducing or identifying human bias in some circumstances.[33] Human decision-making is not immune from conscious or unconscious bias. But bias in AI technology raises important concerns. There are two main types of bias:

• data bias

• system bias.

Data bias

3.30 Bias can occur due to:

• underlying bias in the data that an AI system is trained on

• the use of selective or unrepresentative data.

3.31 If the model is built and tested using biased data, such as data reflecting patterns of inequality, discrimination or prejudice, this bias will be replicated in the outputs.[34] This is a critical issue for courts, where fairness and non-discrimination are fundamental.

3.32 The risk of bias is also a concern because some groups are overrepresented in Victoria’s justice system. First Nations people account for 12.6 per cent of all prisoners in Victoria,[35] but only 1.2 per cent of the Victorian population.[36] Using data in which some groups are overrepresented could reinforce assumptions based on race or other discriminatory factors.

3.33 Bias from AI can arise in risk assessment or other automated decision-making tools. Risk assessment tools that draw on data related to offences where certain groups are overrepresented may result in the AI system overemphasising the risk of recidivism for those groups. Even if the inputs are not intentionally discriminatory, the AI system may rely on inputs that correlate with protected attributes and so produce biased results.[37] For example, information about a person’s education, socio-economic background and geographic location may act as a proxy for race.
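
This proxy effect can be illustrated with a simplified, entirely synthetic sketch (the groups, postcodes and figures below are invented for illustration). A naive scoring tool that never receives race as an input can still reproduce a racial disparity through a correlated feature such as postcode, because the historical records it learns from already encode that disparity.

```python
import random
random.seed(0)

def make_record(group: str) -> dict:
    # Invented data: group membership correlates with postcode, and historical
    # over-policing gives group "B" more recorded prior offences on average.
    postcode = "3000" if group == "A" else "3999"
    weights = [6, 3, 1] if group == "A" else [3, 4, 3]
    prior_offences = random.choices([0, 1, 2], weights=weights)[0]
    return {"group": group, "postcode": postcode, "prior_offences": prior_offences}

history = [make_record("A") for _ in range(500)] + [make_record("B") for _ in range(500)]

def postcode_risk(postcode: str) -> float:
    # A naive 'risk score': the average recorded offences for that postcode.
    records = [r for r in history if r["postcode"] == postcode]
    return sum(r["prior_offences"] for r in records) / len(records)

# Race is never an input, yet postcode 3999 (and therefore group B) is scored
# as 'higher risk' because the historical records encode the earlier disparity.
print({p: round(postcode_risk(p), 2) for p in ("3000", "3999")})
```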

3.34 Bias due to the limitations of training data has been a common criticism of facial recognition technology, with problems reported about misidentification and inaccuracy with some ethnic groups.[38]

3.35 Human bias in the design of AI systems is also a risk. Assumptions may be built into the AI model. This can occur during the development stage, such as in the choice of input and output variables, or during the training stage.[39]

System bias

3.36 Machine learning carries the risk of reinforcing bias through its ongoing learning from and application of data. An algorithmic risk assessment tool that overemphasises the risk of reoffending for a particular group may lead to people from that group being incarcerated at a higher rate or for longer.[40] Where the tool uses machine learning, there is a risk that bias will be amplified over time as the model continues to learn from, or is re-trained on, new data. The system might also create new biases by finding new statistical relationships about certain groups and applying them to predictions or decisions.[41]

Inaccuracy

3.37 AI can provide complex analysis of large amounts of data very quickly, but there are risks relating to the accuracy of its outputs. Rule-based or expert AI systems can analyse data with increased precision, which can reduce errors.[42] But even rule-based or expert systems can generate inaccuracies. This can happen when the data an AI model is trained on is too narrow and the model is unable to adapt meaningfully to new inputs. Generative AI poses different inaccuracy risks because it can produce hallucinations, as we discuss later.

3.38 It is not always obvious when AI is wrong. Inaccurate outputs may appear to be true and correct. The risk of inaccuracy is amplified because people are less likely to check the results. This risk is particularly pronounced with generative AI systems, which tend towards affirmative responses and present their outputs with unwarranted confidence.[43]

3.39 Inaccuracies arising from the use of AI by courts and tribunals could lead to unfairness and a loss of public trust. Even errors in administrative functions can have considerable implications. The risk of error in decision-making involving people’s rights or liberty can have significant consequences. These errors undermine confidence in the justice system, whether the decision is made by a human or machine. Aspects of AI that raise issues of accuracy are further discussed below.

Examples of errors by legal professionals using generative AI for court submissions are increasingly common. In Zhang v Chen, counsel filed a notice of application containing non-existent legal authorities, which had been made up (‘hallucinated’) by ChatGPT.[44]

In another example, an administrative error by pre-approved divorce software used in family courts in England and Wales required 2,235 cases to be re-opened and resubmitted.[45]

Data quality

3.40 AI systems are trained on huge quantities of data. Many are trained on public data ‘scraped’ from the internet, where there is no control for accuracy or bias. If the data is inaccurate, unreliable, incomplete or skewed, the system will reflect these flaws in its content or decisions.[46] Legal journals often sit behind paywalls, meaning that non-specialised AI systems are unlikely to be trained on peer-reviewed scholarly articles or information specific to the legal sector. Another risk is that training data might focus on other jurisdictions, rather than Australian case law and legislation.

AI systems trained primarily on United States case law may produce answers based on patterns identified in American text and use terms and concepts that are not applicable in the Australian context.[47]

Hallucinations

3.41 Generative AI can create entirely wrong or fictional outputs, often described as hallucinations. Hallucinations are more common with general purpose large language models but can also occur in domain-specific applications. A recent study found that general purpose (‘open domain’) large language models hallucinated at high rates when performing legal tasks.[48] The likelihood of hallucination varied with the complexity of the task, the type of case, the court and jurisdiction involved, and the year the case was decided. The models were more accurate on simple tasks involving prominent cases from higher courts.[49]

3.42 Large language models in particular can hallucinate in different ways. They may produce a response that is inaccurate or in conflict with the prompt. This can lead to inaccurate summaries of judicial opinions or the extraction of irrelevant case material. Large language models can also produce a response that is convincing but entirely wrong or fictional. Several examples from overseas have involved barristers unintentionally relying on non-existent case law generated by AI.[50]

3.43 The risk for courts and tribunals was outlined in one United States case where counsel relied on a non-existent case generated by ChatGPT:

Many harms flow from the submission of fake opinions. The opposing party wastes time and money in exposing the deception. The Court’s time is taken from other important endeavours. The client may be deprived of arguments based on authentic judicial precedents. There is potential harm to the reputation of judges and courts whose names are falsely invoked as authors of the bogus opinions and to the reputation of a party attributed with fictional conduct. It promotes cynicism about the legal profession and the American judicial system. And a future litigant may be tempted to defy a judicial ruling by disingenuously claiming doubt about its authenticity.[51]

Reducing hallucinations

3.44 Some domain-specific applications have introduced ‘retrieval-augmented generation’ to try to reduce hallucinations. This allows large language models to use domain-specific data to produce answers linked to source material. To summarise a legal case, a retrieval-augmented generation system would first look up the case name in a legal database, retrieve the relevant metadata, and then provide that material to the large language model to respond to the user prompt.[52] This has the benefit of drawing on higher quality, jurisdiction-specific data. The approach can reduce the risk of error, but cannot eliminate it completely.
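
A simplified sketch of this retrieve-then-generate pattern is set out below. It is illustrative only: the database contents, the `search_legal_database` helper and the `generate` function are hypothetical placeholders rather than any particular product’s interface.

```python
# Illustrative retrieval-augmented generation (RAG) pipeline (hypothetical).

def search_legal_database(case_name: str) -> str:
    """Look up the case in a curated, jurisdiction-specific database and
    return its authoritative text or metadata (hypothetical helper)."""
    database = {
        "Example v Sample [2020] VSC 1": "Headnote and key passages of the judgment...",
    }
    return database.get(case_name, "")

def summarise_case(case_name: str, generate) -> str:
    """Retrieve the source material first, then ask the language model to
    summarise only that material."""
    source = search_legal_database(case_name)
    if not source:
        # Declining to answer without retrieved material is one way RAG
        # systems try to avoid fabricated ('hallucinated') authorities.
        return f"No authoritative record found for {case_name}."
    prompt = (
        "Summarise the following case, using only the text provided.\n"
        f"Case: {case_name}\n"
        f"Source material:\n{source}"
    )
    return generate(prompt)  # `generate` stands in for any large language model call
```

Because the model is asked to work only from the retrieved text, the quality of the output depends heavily on the coverage and accuracy of the underlying database and the retrieval step.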

3.45 Some legal technology providers claim that retrieval-augmented generation largely prevents hallucination in legal tasks.[53] While hallucinations are reduced relative to general purpose applications (such as ChatGPT), a recent study found hallucinations can still occur.[54] The study noted that retrieval-augmented generation is ‘unlikely to fully solve the hallucination problem’.[55]

Deepfakes

3.46 While the court’s role in identifying forgeries and determining the authenticity of evidence is not new, deepfakes present new challenges. A deepfake is any form of media ‘that has been altered or entirely or partially created from scratch’.[56]

3.47 Generative AI can be used to create deepfakes that are difficult to detect.[57] AI technology makes it easy for individuals to create a multilayered and complex web of false information. For example, AI can be used to create:

a fake video, fake websites that host the video and generate disinformation and misinformation about what is displayed in the video, fake Twitter accounts that link to the video, fake accounts on discussion forums that discuss the content of the fake video.[58]

3.48 Deepfakes raise evidentiary issues for courts and tribunals where they are tendered as evidence in the form of manipulated videos, images, audio or text.[59] Courts and tribunals will need to determine matters concerning deepfake material. For example, there may be a copyright dispute that turns on a deepfake image.[60]

3.49 The proliferation of deepfakes may lead parties to suspect and dispute evidence that is in fact authentic.[61] International collaborations are under way to increase trust in the legitimacy of genuine audio-visual content, including content relied upon by police and prosecutorial bodies.[62]

A falsified audio recording in which a parent was heard to threaten their child was submitted in a custody dispute in the United Kingdom. The deepfake was detected in that case, but the risk of deepfake material emerging in evidence clearly extends to other jurisdictions.[63]

Reduced quality of outputs and devaluing of human judgement

3.50 Faster and cheaper processes do not necessarily result in better outcomes. AI systems present opportunities to increase efficiency and productivity but can reduce quality. Increasing use of AI can also undervalue human judgement, replacing processes enhanced by human insight with more generic outcomes.

3.51 Legg and Bell note that AI might replace some mechanistic aspects of a lawyer’s role but ‘legal skills that draw on the lawyer’s humanity and ethics, and which AI cannot provide, will be more sought after and more valuable’.[64] They argue that legal professional judgement will become increasingly important for several reasons. AI systems cannot reason in context, do not provide reasons and explanations for outputs, and are not bound by the legal and ethical obligations owed by lawyers.[65]

3.52 If an automated AI system conducts client interviews instead of a lawyer, the questions it asks may be standardised and not consider the context or the individual circumstances of the client.[66] The AI system may not be able to apply contextual judgement to determine if the client is withholding information.[67] While an AI may process a higher volume of intake assessments than a human, this may not result in the best outcome for the client.

3.53 It is important to recognise the value of human judgement in judicial matters. For instance, therapeutic or restorative justice approaches use human insight and discretion to ensure meaningful opportunities to participate in the legal process, and to guide sentencing towards positive behavioural change.[68]

Access to justice challenges

3.54 There are concerns that AI could make existing barriers to justice worse. The risk of bias could entrench marginalisation for some groups. Increased use of technology can also create additional barriers for court users with poor digital skills, limited access to technology or low levels of literacy.[69] These issues are discussed further in the principles section of Part C.

3.55 The cost of technology raises issues about access to justice. AI offers the opportunity to access legal advice more quickly and at lower cost. But companies that own proprietary data (such as legal databases) might dominate the market for specialised AI tools. The tools may then become unaffordable for small organisations or individuals. These factors may increase inequities related to cost and access to legal advice.

3.56 There are concerns that ‘a focus on the cheap and quick resolution of disputes will come at the cost of just outcomes’.[70] The centrality of fairness as a principle of justice is considered in Part C.

Judicial independence and trust in the courts

3.57 Specific risks to principles of justice are raised by the use of AI in courts and tribunals, including judicial independence and public trust. A spectrum of potential AI uses exists across courts and tribunals, ranging from assistance with court administration to judicial decision-making. At the extreme end, there has been speculation that AI may one day replace judges.

3.58 Chief Justice Allsop has emphasised that courts involve ‘human reasoning and emotion, and that the courts are humane’.[71] The human element of decision-making is an important factor in maintaining trust in courts. Many ethical questions arise about using machines as judges and whether they can possess ‘the rational and emotional authority to make decisions in place of a human judge’.[72] We consider these issues further in Part C.

Questions

3. What are the most significant benefits and risks of the use of AI by:

a. Victorian courts and tribunals?

b. legal professionals and prosecutorial bodies?

c. the public including court users, self-represented litigants and witnesses?

4. Are there additional risks and benefits that have not been raised in this consultation paper? What are they and why are they important?


  1. Efficiency can refer to reducing costs for services or to better allocating resources for the most appropriate purpose. Cost-effectiveness involves a range of considerations including efficiency in the provision of services. For further discussion of these terms, see Australian Government, On Efficiency and Effectiveness: Some Definitions (Staff Research Note, Productivity Commission, May 2013).

  2. Samuel Hodge, ‘Revolutionizing Justice: Unleashing the Power of Artificial Intelligence’ (2023) 26(2) SMU Science and Technology Law Review 217, 227.

  3. LexisNexis, Generative AI and the Future of the Legal Profession (Report, LexisNexis, 2024) 5–7.

  4. Michael Legg and Felicity Bell, ‘Artificial Intelligence and the Legal Profession: Becoming the AI-Enhanced Lawyer’ (2019) 38(2) University of Tasmania Law Review 34, 42.

  5. For example, research by Naomi Burstyner and others estimates that only 4% of the 1,890 cases filed in 2013–14 proceeded to judgment, and only 6.4% of NSW Supreme Court cases were finalised by a final order determination in 2014: Naomi Burstyner et al, ‘Why Do Some Civil Cases End up in a Full Hearing? Formulating Litigation and Process Referral Indicia through Text Analysis’ (2016) 25 Journal of Judicial Administration 257.

  6. Richard Susskind, Online Courts and the Future of Justice (Oxford University Press, 2019) 66–70.

  7. Law Council of Australia, The Justice Project (Final Report, August 2018) 48 <https://lawcouncil.au/files/web-pdf/Justice%20Project/Final%20Report/Justice%20Project%20_%20Final%20Report%20in%20full.pdf>.

  8. The Law Society of England and Wales, Technology, Access to Justice and the Rule of Law (Report, The Law Society of England and Wales, September 2019) 8.

  9. Nicola Tulk, Chris Gorst and Louisa Shanks, The Legal Access Challenge: Closing the Legal Gap through Technology Innovation (Report, Solicitors Regulation Authority, June 2020) 16.

  10. Amy Schmitz and John Zeleznikow, ‘Intelligent Legal Tech to Empower Self-Represented Litigants’ (2022) 23(1) Science and Technology Law Review 142.

  11. Department of Industry, Science and Resources (Cth), Safe and Responsible AI in Australia: Discussion Paper (Discussion Paper, June 2023) 7.

  12. Krystal Hu, ‘ChatGPT Sets Record for Fastest-Growing User Base – Analyst Note’, Reuters (online, 2 February 2023) <https://www.reuters.com/technology/chatgpt-sets-record-fastest-growing-user-base-analyst-note-2023-02-01/>.

  13. Nestor Maslej et al, The AI Index 2024 Annual Report (Report, AI Index Steering Committee, Institute for Human-Centered AI, Stanford University, April 2024) 30 <https://aiindex.stanford.edu/wp-content/uploads/2024/04/HAI_2024_AI-Index-Report.pdf>.

  14. Ibid.

  15. LexisNexis, Generative AI and the Future of the Legal Profession (Report, LexisNexis, 2024) 5.

  16. House of Commons Justice Committee, Court and Tribunal Reforms (Second Report of Session 2019 No HC 190, 30 October 2019) 8 <https://publications.parliament.uk/pa/cm201919/cmselect/cmjust/190/190.pdf>. See also Online Dispute Resolution Advisory Group, Online Dispute Resolution for Low Value Civil Claims (Report, Civil Justice Council, February 2015); Richard Susskind, Online Courts and the Future of Justice (Oxford University Press, 2019) 95–109.

  17. Office of the Chief Justice of New Zealand, Digital Strategy for Courts and Tribunals (Report, Courts of New Zealand, March 2023) <https://www.courtsofnz.govt.nz/assets/7-Publications/2-Reports/20230329-Digital-Strategy-Report.pdf>.

  18. Felicity Bell et al, AI Decision-Making and the Courts: A Guide for Judges, Tribunal Members and Court Administrators (Report, Australasian Institute of Judicial Administration Incorporated, December 2023).

  19. Chief Justice Andrew Bell, ‘Truth Decay and Its Implications: An Australian Perspective’ (Speech, 4th Judicial Roundtable, Durham University, 23 April 2024) <https://supremecourt.nsw.gov.au/supreme-court-home/about-us/speeches/chief-justice.html>.

  20. Office of the Victorian Information Commissioner, Artificial Intelligence – Understanding Privacy Obligations (Report, April 2021) <https://ovic.vic.gov.au/privacy/resources-for-organisations/artificial-intelligence-understanding-privacy-obligations/>.

  21. Ibid 4–7.

  22. For example, see privacy risks related to use of ChatGPT discussed in Privacy and Data Protection Deputy Commissioner, Use of Personal Information with ChatGPT (Public Statement, Office of the Victorian Information Commissioner, February 2024).

  23. Office of the Victorian Information Commissioner, Artificial Intelligence – Understanding Privacy Obligations (Report, April 2021) 8 <https://ovic.vic.gov.au/privacy/resources-for-organisations/artificial-intelligence-understanding-privacy-obligations/>.

  24. Other barriers to explainability identified by academics include intentional secrecy, technical illiteracy, non-intuitiveness, inherent inscrutability due to scale, and poor documentation, which are discussed in the following: Jenna Burrell, ‘How the Machine “Thinks”: Understanding Opacity in Machine Learning Algorithms’ (2016) 3(1) Big Data & Society 1; Andrew D Selbst and Solon Barocas, ‘The Intuitive Appeal of Explainable Machines’ (2018) 87 Fordham Law Review 1085; Henry Fraser, Rhyle Simcock and Aaron J Snoswell, ‘AI Opacity and Explainability in Tort Litigation’ (Conference Paper, FAccT ’22: 2022 ACM Conference on Fairness, Accountability, and Transparency, 21–24 June 2022) <https://dl.acm.org/doi/10.1145/3531146.3533084>.

  25. Han-Wei Liu, Ching-Fu Lin and Yu-Jie Chen, ‘Beyond State v. Loomis: Artificial Intelligence, Government Algorithmization, and Accountability’ (2019) 27(2) International Journal of Law and Information Technology 122, 140; Georgios Pavlidis, ‘Unlocking the Black Box: Analysing the EU Artificial Intelligence Act’s Framework for Explainability in AI’ (2024) 16(1) Law, Innovation & Technology 293, 294–5.

  26. Georgios Pavlidis, ‘Unlocking the Black Box: Analysing the EU Artificial Intelligence Act’s Framework for Explainability in AI’ (2024) 16(1) Law, Innovation & Technology 293, 295.

  27. Ibid.

  28. Jenna Burrell, ‘How the Machine “Thinks”: Understanding Opacity in Machine Learning Algorithms’ (2016) 3(1) Big Data & Society 1, 3–4; Andrew D Selbst and Solon Barocas, ‘The Intuitive Appeal of Explainable Machines’ (2018) 87 Fordham Law Review 1085, 1092.

  29. State of Wisconsin v Loomis 371 Wis.2d 235 (2016).

  30. Han-Wei Liu, Ching-Fu Lin and Yu-Jie Chen, ‘Beyond State v. Loomis: Artificial Intelligence, Government Algorithmization, and Accountability’ (2019) 27(2) International Journal of Law and Information Technology 122, 138–9.

  31. Henry Fraser, Rhyle Simcock and Aaron J Snoswell, ‘AI Opacity and Explainability in Tort Litigation’ (Conference Paper, FAccT ’22: 2022 ACM Conference on Fairness, Accountability, and Transparency, 21-24 June 2022) <https://dl.acm.org/doi/10.1145/3531146.3533084>.

  32. Trivago NV v Australian Competition and Consumer Commission [2020] FCAFC 185; (2020) 384 ALR 496, [69].

  33. Australian Human Rights Commission, Human Rights and Technology (Final Report, 2021) 41; Law Commission of Ontario, Accountable AI (LCO Final Report, June 2022) 18.

  34. Australian Human Rights Commission, Human Rights and Technology (Final Report, 27 May 2021) 106–7 <https://humanrights.gov.au/our-work/technology-and-human-rights/projects/final-report-human-rights-and-technology>; Paul Grimm, Maura Grossman and Gordon Cormack, ‘Artificial Intelligence as Evidence’ (2021) 19(1) Northwestern Journal of Technology and Intellectual Property 9, 42.

  35. Australian Bureau of Statistics, ‘Prisoners in Australia: Aboriginal and Torres Strait Islander Prisoners’, Australian Bureau of Statistics (Web Page, 25 January 2024) <https://www.abs.gov.au/statistics/people/crime-and-justice/prisoners-australia/latest-release#aboriginal-and-torres-strait-islander-prisoners>.

  36. Australian Bureau of Statistics, ‘Estimates of Aboriginal and Torres Strait Islander Australians’, Australian Bureau of Statistics (Web Page, 31 August 2023) <https://www.abs.gov.au/statistics/people/aboriginal-and-torres-strait-islander-peoples/estimates-aboriginal-and-torres-strait-islander-australians/latest-release#:~:text=At%2030%20June%202021%2C%20there,Queensland%20and%20Western%20Australia%20combined.>.

  37. Paul Grimm, Maura Grossman and Gordon Cormack, ‘Artificial Intelligence as Evidence’ (2021) 19(1) Northwestern Journal of Technology and Intellectual Property 9, 43.

  38. Nicholas Davis, Lauren Perry and Edward Santow, Facial Recognition Technology: Towards a Model Law (Report, Human Technology Institute, The University of Technology Sydney, September 2022) 28.

  39. Centre for Data Ethics and Innovation (CDEI), Review into Bias in Algorithmic Decision-Making (Report, Centre for Data Ethics and Innovation (CDEI), November 2020) 28.

  40. Ibid 100.

  41. Ibid 26, 28.

  42. Sarah Grace, ‘Embracing the Evolution: Artificial Intelligence in Legal Practice’ [2024] (183) Precedent 30, 32.

  43. This is a result of how the systems are trained: for example, see Aaron J Snoswell and Jean Burgess, ‘The Galactica AI Model Was Trained on Scientific Knowledge – but It Spat out Alarmingly Plausible Nonsense’, The Conversation (online, 30 November 2022) <http://theconversation.com/the-galactica-ai-model-was-trained-on-scientific-knowledge-but-it-spat-out-alarmingly-plausible-nonsense-195445>.

  44. Zhang v Chen [2024] BCSC 285.

  45. Felicity Bell et al, AI Decision-Making and the Courts: A Guide for Judges, Tribunal Members and Court Administrators (Report, Australasian Institute of Judicial Administration, December 2023) 58, citing Francesco Contini, ‘Artificial Intelligence and the Transformation of Humans, Law and Technology Interactions in Judicial Proceedings’ (2020) 2(1) Law, Technology and Humans 4, 9.

  46. Felicity Bell et al, AI Decision-Making and the Courts: A Guide for Judges, Tribunal Members and Court Administrators (Report, Australasian Institute of Judicial Administration Incorporated, December 2023) 16.

  47. Ibid 50.

  48. Matthew Dahl et al, ‘Large Legal Fictions: Profiling Legal Hallucinations in Large Language Models’ (2024) 16(1) Journal of Legal Analysis 64, 81.

  49. Ibid 73–4, 76, 79.

  50. Zhang v Chen [2024] BCSC 285.

  51. Mata v Avianca, Inc 678 F.Supp.3d 443 (2023), 448–9.

  52. Varun Magesh et al, ‘Hallucination-Free? Assessing the Reliability of Leading AI Legal Research Tools’ (arXiv, 2024) 5–6 <https://arxiv.org/abs/2405.20362>.

  53. Ibid 2.

  54. Varun Magesh et al, ‘Hallucination-Free? Assessing the Reliability of Leading AI Legal Research Tools’ (arXiv, 2024) <https://arxiv.org/abs/2405.20362>.

  55. Ibid 6.

  56. Miriam Stankovich et al, Global Toolkit on AI and the Rule of Law for the Judiciary (Report No CI/DIT/2023/AIRoL/01, UNESCO, 2023) 20 <https://unesdoc.unesco.org/ark:/48223/pf0000387331>.

  57. Agnieszka McPeak, ‘The Threat of Deepfakes in Litigation: Raising the Authentication Bar to Combat Falsehood’ (2021) 23(2) Vanderbilt Journal of Entertainment & Technology Law 433, 438.

  58. Miriam Stankovich et al, Global Toolkit on AI and the Rule of Law for the Judiciary (Report No CI/DIT/2023/AIRoL/01, UNESCO, 2023) 121 <https://unesdoc.unesco.org/ark:/48223/pf0000387331>.

  59. Agnieszka McPeak, ‘The Threat of Deepfakes in Litigation: Raising the Authentication Bar to Combat Falsehood’ (2021) 23(2) Vanderbilt Journal of Entertainment & Technology Law 433, 438–9.

  60. Fan Yang, Jake Goldenfein and Kathy Nickels, GenAI Concepts: Technical, Operational and Regulatory Terms and Concepts for Generative Artificial Intelligence (GenAI) (Report, ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S), and the Office of the Victorian Information Commissioner (OVIC), 2024) 31 <https://apo.org.au/node/327400>.

  61. Paul W Grimm, Maura R Grossman and Gordon V Cormack, ‘Artificial Intelligence as Evidence’ (2021) 19(1) Northwestern Journal of Technology and Intellectual Property 9, 73.

  62. For example, see ‘Restoring Trust and Transparency in the Age of AI’, Content Authenticity Initiative (Web Page, 2024) <https://contentauthenticity.org/>.

  63. Agnieszka McPeak, ‘The Threat of Deepfakes in Litigation: Raising the Authentication Bar to Combat Falsehood’ (2021) 23(2) Vanderbilt Journal of Entertainment & Technology Law 433, 438.

  64. Michael Legg and Felicity Bell, ‘Artificial Intelligence and the Legal Profession: Becoming the AI-Enhanced Lawyer’ (2019) 38(2) University of Tasmania Law Review 34, 36.

  65. Ibid 55–56.

  66. Ibid 55.

  67. Ibid.

  68. Pauline Spencer, ‘From Alternative to the New Normal: Therapeutic Jurisprudence in the Mainstream.’ (2014) 39(4) Alternative Law Journal 222, 223.

  69. Tania Sourdin, Judges, Technology and Artificial Intelligence: The Artificial Judge (Edward Elgar Publishing, 2021) 178.

  70. Ibid 179.

  71. James Allsop, ‘Technology and the Future of Courts’ (2019) 38(1) University of Queensland Law Journal 1, 2.

  72. Tania Sourdin, Judges, Technology and Artificial Intelligence: The Artificial Judge (Edward Elgar Publishing, 2021) 249.