Artificial Intelligence in Victoria’s Courts and Tribunals: Consultation Paper

2. What is artificial intelligence?

Overview

• AI is a broad term which captures a range of existing and potential technologies.

• We outline AI technologies most relevant to use in courts and tribunals.

• AI has attributes that set it apart from previous technologies and computer systems. Understanding these differences may help in thinking about the opportunities and risks for using AI in courts and tribunals.

Definitions of artificial intelligence

2.1 There is no universally agreed definition of AI. This is partly because the technology continues to rapidly evolve, and so do the ways it is regulated.

2.2 A range of broad definitions are included in guidance materials developed by courts in Australia and overseas.[1]

2.3 General definitions are commonly cited from peak standards organisations including the International Organization for Standardization,[2] Standards Australia[3] and the United States National Institute of Standards and Technology.[4]

2.4 The Organisation for Economic Co-operation and Development (OECD) has recently updated its definition of AI:

A machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.[5]

2.5 The OECD definition is recognised internationally,[6] and has been adopted by the Australian Government.[7] We use the OECD definition in this consultation paper.

Technologies and techniques of AI

2.6 AI is a broad term referring to a range of computational techniques. Below, we set out the technologies within the scope of this review.

Expert systems

2.7 Expert systems are computer programs that use pre-defined rules (‘if-then rules’) and a knowledge base to infer conclusions. Expert systems were an early example of AI, although not all expert systems are considered AI. Unlike more recent AI technologies, traditional expert systems do not learn and adapt without human intervention.

The HYPE tool is a legal expert system, developed through AustLII’s DataLex platform, which automatically generates large-scale mark-ups of legal documents.[8]
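To make the ‘if-then’ structure concrete, the sketch below shows a minimal rules-based program in Python. The fee-waiver rule and facts are invented for illustration and are not drawn from any actual system.

```python
# A minimal sketch of an expert system: pre-defined 'if-then' rules applied
# to a set of facts. The fee-waiver rule is invented for illustration.

RULES = [
    # (condition, conclusion) pairs encode the 'if-then' rules.
    (lambda facts: facts["receives_benefits"], "eligible for fee waiver"),
    (lambda facts: facts["income"] < 30_000, "eligible for fee waiver"),
    (lambda facts: True, "not eligible for fee waiver"),  # default rule
]

def infer(facts):
    """Apply each rule in order and return the first conclusion that fires."""
    for condition, conclusion in RULES:
        if condition(facts):
            return conclusion

print(infer({"receives_benefits": False, "income": 25_000}))
# -> eligible for fee waiver
```

Because the rules are fixed in advance, the program cannot revise them in light of new data; that is the contrast with the machine learning techniques described next.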

Machine learning and deep learning

2.8 Machine learning is a ‘set of techniques for creating algorithms so that computational systems can learn from data’.[9] Machine learning models are trained on large volumes of data to find patterns which can be applied to make predictions or decisions. These types of AI can improve their performance, ‘learning’ over time based on experience and feedback about previous responses.

AI algorithms used by social media apps are an example of machine learning. These algorithms filter and recommend content based on what the user chooses to view on platforms such as Facebook, Instagram and YouTube.
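The following toy Python sketch illustrates the ‘learning’ step described above: the model is given labelled examples rather than rules, and infers a pattern it can apply to new inputs. The viewing-hours data is invented for illustration.

```python
# A toy illustration of machine learning with scikit-learn: the model is
# not programmed with rules; it infers a pattern from labelled examples.
from sklearn.tree import DecisionTreeClassifier

# Each row: [hours spent viewing topic A, hours spent viewing topic B]
X = [[5, 0], [4, 1], [0, 5], [1, 4]]
y = ["recommend A", "recommend A", "recommend B", "recommend B"]

model = DecisionTreeClassifier().fit(X, y)  # the 'learning' step
print(model.predict([[3, 1]]))              # -> ['recommend A']
```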

2.9 Deep learning is a ‘subset of machine learning that is loosely based on the information-processing architecture of the brain’.[10] This kind of AI uses neural networks to analyse complex patterns and learn ‘rules’ from vast amounts of data. Deep learning is used in many kinds of AI technology including image recognition, face recognition and speech recognition.

Personal voice assistants like Apple’s Siri or Amazon’s Alexa are common examples of deep learning applied to voice interaction.
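A minimal sketch of this layered approach is below, using scikit-learn’s small neural-network classifier on the classic XOR toy problem, which no single linear rule can capture. The layer sizes are illustrative only.

```python
# A minimal sketch of deep learning's layered structure, using a small
# neural-network classifier. XOR is a classic toy pattern that cannot be
# captured by one linear rule.
from sklearn.neural_network import MLPClassifier

X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 1, 1, 0]  # XOR pattern

# Two hidden layers of artificial 'neurons'; production systems use far more.
model = MLPClassifier(hidden_layer_sizes=(8, 8), max_iter=5000, random_state=1)
model.fit(X, y)
print(model.predict(X))  # ideally [0 1 1 0]; tiny examples may fail to converge
```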

Generative AI

2.10 Generative AI is a subset of machine learning that generates new content by recognising patterns and making statistical predictions about the best response to a prompt.[11] A prompt is ‘an instruction, query or command that a user enters into an interface to request a response from the system’.[12] Outputs created by generative AI may not always be the same, even when the same prompt is used. Generative AI models are trained on huge amounts of data. Text, code, images, video and audio are some of the outputs that can be created through generative AI.
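The simplified Python sketch below illustrates this statistical prediction: the system samples the next word from a probability distribution conditioned on the prompt, which is why the same prompt can yield different outputs. The vocabulary and probabilities are invented for illustration.

```python
# A heavily simplified sketch of the statistical prediction behind
# generative AI: sample the next word from a probability distribution
# conditioned on the prompt. Probabilities here are invented.
import random

NEXT_WORD = {
    ("the", "court"): {"held": 0.5, "found": 0.3, "ordered": 0.2},
}

def next_word(prompt):
    """Sample one continuation of the prompt according to its probabilities."""
    dist = NEXT_WORD[prompt]
    words, weights = zip(*dist.items())
    return random.choices(words, weights=weights)[0]

# The same prompt can produce different outputs on different runs.
print(next_word(("the", "court")))  # e.g. 'held'
print(next_word(("the", "court")))  # e.g. 'found'
```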

2.11 Many generative AI systems involve complex supply arrangements, meaning data and technology move between multiple parties, sometimes based in different jurisdictions. In contrast, older types of AI, such as rules-based or expert systems, typically require limited movement of data between parties or locations.

2.12 Large language models are a type of generative AI. They use natural language processing to generate language-based outputs, such as text or code. The data a large language model draws on depends on the developer’s choices in shaping that system’s inputs and outputs, and often includes rules excluding certain sources or content.

ChatGPT is an example of a large language model. Legal-specific generative AI systems include Lexis+ AI and Thomson Reuters CoCounsel.[13]

Automation

2.13 Automation refers to the use of technology to undertake tasks or processes with minimal human input. Not all automated systems are AI: automation and AI are overlapping but distinct concepts, with AI enabling more complex tasks to be automated.[14] Low-level automation that does not involve AI is outside the scope of this review, as automation has shaped the legal system for much longer than AI has.

The Commonwealth Courts Portal and the National Court Framework’s eLodgment are examples of automated systems. They facilitate the centralised lodgement of documents and access by parties to files and hearing dates.[15]

Algorithmic or automated decision-making

2.14 Algorithmic or automated decision-making refers to computational systems which assist or replace decisions by humans.

2.15 Automated decision-making can range from analysis or predictions that support human decisions through to fully automated AI systems which produce final decisions.[16] Some automated decision-making, such as routine rule-based tasks, does not involve AI. Other automated decision-making uses machine learning and other AI technologies.

2.16 Automated decision-making can be represented on a scale of increasing automation and decreasing human involvement. This means an automated system could:

1. Collect and present information for consideration by a decision-maker

2. Filter information to provide a set of cases for a decision-maker to consider

3. Provide guidance through the decision-making process

4. Recommend a course of action

5. Make final decisions.[17]

2.17 The issues and risks of automated decision-making and AI are inter-related. They include bias, privacy, procedural fairness, transparency, security and contestability.

Mobile detection cameras used in New South Wales are supported by AI to provide automated decision-making. The AI system filters images that show potential illegal mobile phone use. An authorised officer then reviews the filtered images and decides, based on the information supplied, whether to issue a penalty notice.[18]
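A schematic sketch of this pattern, level 2 on the scale above, is set out below: an AI model filters cases, and the final decision remains with a person. The scores, threshold and names are invented for illustration.

```python
# A schematic sketch of 'level 2' automation: the system filters cases for
# a human decision-maker. Scores and threshold are invented for illustration.

def phone_use_score(image):
    """Stand-in for a trained model's confidence that an image shows
    illegal mobile phone use."""
    return image["score"]

captured = [
    {"id": 1, "score": 0.95},
    {"id": 2, "score": 0.10},
    {"id": 3, "score": 0.85},
]

# The AI step: filter images above a confidence threshold.
flagged = [img for img in captured if phone_use_score(img) >= 0.8]

# The human step: the final decision remains with an authorised officer.
for img in flagged:
    print(f"Image {img['id']} queued for review by an authorised officer")
```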

2.18 The figure below illustrates the interaction of the key terms we have now covered. The figure highlights that:

• Machine learning is a subset of AI, and generative AI is a subset of machine learning

• Expert systems can involve AI, but often do not

• Automated decision-making intersects with all types of AI, as well as systems that are not AI.

Figure 2: Interactions of AI technologies

Key attributes of AI

How is AI different to traditional computer systems?

2.19 AI systems differ from traditional computer systems and automation. This is important as we consider benefits and risks. The Australian Government’s recent proposals paper, Safe and Responsible AI, identifies key differences between AI and other types of software:[19]

• Autonomy: AI systems can make decisions autonomously, without human intervention at any stage of the decision-making process.

• General cognitive capabilities: General purpose systems like large language models can exhibit behaviour that, in humans, would require general cognitive capabilities, as opposed to specific capabilities to solve a task.

• Adaptability and learning: AI systems can adapt and improve their performance over time by learning from data.

• Speed and scale: AI has an unparalleled capacity to analyse massive amounts of data in a highly efficient and scalable way. It also allows for real-time decision-making and distribution of outputs at a very large scale.

• Opacity or lack of explainability (the ‘black box’ problem): The most advanced AI models are trained on data that is often too vast and too complex for humans to efficiently process, and which may not have been curated or documented prior to ingestion. Techniques used to reason from data are multi-layered and under-studied, contributing to a limited understanding of their outputs. Decisions that AI systems make are not always traceable.

• High realism: AI has reached a point where it can emulate human-like behaviours. It can be difficult to distinguish AI interactions or outputs from human interactions or outputs.

• Versatility: AI models are a multipurpose technology that can perform tasks beyond those intended by their developers.

• Ubiquity: AI, particularly generative AI, is an increasing part of our everyday lives and continues to be developed and adopted rapidly.

2.20 AI poses different risks when compared to traditional software because:[20]

• the system is trained on large amounts of data that may not be true, representative or appropriate for the intended use

• systems rely on complex and distributed operational supply chains and data flows

• there may be biases built into the system

• it may provide different outputs for the exact same prompt

• it can be difficult to predict or detect adverse impacts

• the system needs to be maintained to remain accurate and up to date

• the underlying technology and how it operates can be opaque

• it can be difficult to define and allocate accountability across the AI lifecycle

• testing and validation of AI systems is a difficult task in pre- and post-market contexts, compared to traditional software.[21]

Open and closed AI systems

2.21 AI technology is often described as ‘open’ or ‘closed’. This can refer to two different concepts:

• open and closed source refers to whether access to the underlying model is freely available

• open and closed domain refers to the type of data used by that AI system.

2.22 These concepts of ‘open’ and ‘closed’ are not necessarily binary and can be considered on a spectrum.[22]

Open and closed source

2.23 AI models are called ‘open source’ when the underlying architecture is freely available and publicly accessible. Developers, researchers and administrators can use the source code, collaborate and modify programs, subject to any applicable open-source licences.[23]

2.24 ‘Closed source’ models do not provide access to the underlying architecture.[24] Intellectual property laws may protect closed source models, making it difficult to understand the underlying technology or how outputs are generated. This ‘opacity’ can impact both institutional independence and judicial impartiality in the context of courts and tribunals.[25]

2.25 There are merits in using either open- or closed-source AI. Open source models provide greater transparency and allow greater customisation: the organisation implementing the AI system can access the underlying architecture and modify the system to perform specific tasks. This can also mean more community and research collaboration, and faster development and innovation.[26] However, risks include concerns about data privacy, regulatory challenges where a model has decentralised control and widespread distribution, and vulnerability to security breaches. Individuals can apply open AI models to produce harmful content or breach confidentiality requirements.[27]

GPT-4, the model behind ChatGPT, is an example of a closed source large language model: OpenAI does not disclose its system architecture, hardware or training data, amongst other things.[28]

Llama 3.1 is an open source large language model that can be downloaded freely and potentially trained on closed domain data.[29] OpenJustice was one of the first global open source large language models designed for the administration of law.[30]

Open and closed domain

2.26 ‘Open domain’ AI refers to a large language model that is trained on a broad range of data or inputs, and can respond to a wide range of topics. For example, ChatGPT draws on publicly available material from the internet.[31]

2.27 ‘Closed domain’ AI refers to a large language model that is trained on a defined set of material, often subject-specific. This distinction is important for retrieval-augmented generation, a technique that retrieves material from a specified domain and anchors the system’s responses in that material. This has been shown to improve the reliability of responses and reduce the likelihood of error.[32]

Examples of closed domain AI that use retrieval-augmented generation are LexisNexis’s Legal AI Assistant and Thomson Reuters CoCounsel (discussed further below).[33]
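The Python sketch below illustrates the retrieval-augmented generation pattern in minimal form. The corpus, the naive word-overlap ranking and the llm() stub are placeholders for illustration, not any vendor’s actual system or API.

```python
# A minimal sketch of retrieval-augmented generation (RAG): retrieve
# relevant material from a closed domain, then instruct the model to
# answer from that material. All names here are placeholders.

CORPUS = [
    "Practice note on filing deadlines in civil proceedings ...",
    "Guidelines for litigants on the responsible use of AI in litigation ...",
]

def retrieve(query, k=1):
    """Rank documents by word overlap with the query (real systems use
    vector embeddings) and return the top k."""
    q = set(query.lower().split())
    return sorted(CORPUS, key=lambda doc: len(q & set(doc.lower().split())),
                  reverse=True)[:k]

def llm(prompt):
    """Placeholder for a call to a large language model."""
    return f"[answer grounded in: {prompt[:70]}...]"

query = "What do the guidelines say about the use of AI in litigation?"
context = "\n".join(retrieve(query))
print(llm(f"Answer using only the material below.\n{context}\n\nQuestion: {query}"))
```

Production systems rank documents with vector embeddings rather than word overlap, but the principle is the same: anchoring the model’s response in retrieved material is what improves reliability.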

General purpose AI

2.28 ‘General purpose AI’ refers to AI models which are trained on a large amount of data and can undertake a wide array of tasks.[34] Earlier AI technology was often only competent at a discrete task or application.

2.29 General purpose AI has been adopted as a regulatory category by the European Union in the Artificial Intelligence Act (EU AI Act) and is defined to mean an AI model that:

displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications.[35]

2.30 Foundation models are often described as general purpose AI. They are trained on large datasets which can be applied to a wide range of applications.[36] Users can download open source foundation models and then adapt them for particular uses.[37]

Artificial general intelligence

2.31 Artificial general intelligence or ‘general AI’ refers to AI systems with cognitive abilities similar to human beings.[38] It is beyond the scope of this review because it remains theoretical. Its likely impact is hard to define without real applications.[39]

2.32 The lack of cognitive ability in current AI can help us to think about the ethical and appropriate use of AI in courts and tribunals.[40] Some AI systems and models can simulate legal reasoning, but none can exercise reason.[41] AI technology based on deep learning may be patterned on the human brain, but it produces outputs based on statistical probability.[42]

2.33 When AI creates an output it is not expressing an opinion. Understanding the moral consequence of a decision, including exercising empathy, compassion, discretion or mercy, remains beyond the capability of any current AI technology.[43]

Questions

1. Should courts and tribunals adopt a definition of AI? If so, what definition?

2. Are there specific AI technologies that should be considered within or out of the scope of this review?


  1. See, for example, Courts of New Zealand, Guidelines for Use of Generative Artificial Intelligence in Courts and Tribunals: Judges, Judicial Officers, Tribunal Members and Judicial Support Staff (Report, 7 December 2023); County Court of Victoria, Guidelines for Litigants: Responsible Use of Artificial Intelligence in Litigation (Report, 3 July 2024) <https://www.countycourt.vic.gov.au/practice-notes>; Supreme Court of Victoria, Guidelines for Litigants: Responsible Use of Artificial Intelligence in Litigation (Guidelines, 6 May 2024) <http://www.supremecourt.vic.gov.au/forms-fees-and-services/forms-templates-and-guidelines/guideline-responsible-use-of-ai-in-litigation>; James E Baker, Laurie N Hobart and Matthew Mittelsteadt, An Introduction to Artificial Intelligence for Federal Judges (Report, Federal Judicial Center, 2023) 5, citing National Security Commission on Artificial Intelligence (NSCAI), Interim Report (November 2019) 8 <https://www.nscai.gov/wp-content/uploads/2021/01/NSCAI-Interim-Report-forCongress_201911.pdf>.

  2. The International Organization for Standardization defines an AI system as ‘an engineered system that generates outputs such as content, forecasts, recommendations or decisions for a given set of human-defined objectives’: see International Organization for Standardization (ISO), ‘What Is Artificial Intelligence (AI)?’, ISO (Web Page) <https://www.iso.org/cms/render/live/en/sites/isoorg/contents/news/insights/AI/what-is-ai-all-you-need-to-know.evergreen.html>, citing ISO, ‘ISO/IEC 22989:2022 Information Technology — Artificial Intelligence — Artificial Intelligence Concepts and Terminology’, ISO (Web Page) <https://www.iso.org/standard/74296.html>.

  3. Standards Australia Limited, ‘AS ISO/IEC 22989:2023 Information Technology – Artificial Intelligence – Artificial Intelligence Concepts and Terminology’ <https://www.standards.org.au/standards-catalogue/standard-details?designation=as-iso-iec-22989-202>.

  4. National Institute of Standards and Technology (U.S.), Artificial Intelligence Risk Management Framework (AI RMF 1.0) (Report No NIST AI 100-1, 26 January 2023) 1 <http://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-1.pdf>. The framework defines an AI system as ‘an engineered or machine-based system that can, for a given set of objectives, generate outputs such as predictions, recommendations, or decisions influencing real or virtual environments. AI systems are designed to operate with varying levels of autonomy’.

  5. OECD, Recommendation of the Council on Artificial Intelligence (Report No OECD/LEGAL/0449, 2024).

  6. The OECD is an authoritative international body that plays a leading role in setting international standards. It has 38 member states including Australia, Canada, New Zealand, Germany, Japan, UK, USA, France and Denmark. ‘Members and Partners’, OECD (Web Page) <https://www.oecd.org/en/about/members-partners.html>.

  7. Department of Industry, Science and Resources (Cth), Safe and Responsible AI in Australia: Proposals Paper for Introducing Mandatory Guardrails for AI in High-Risk Settings (Proposals Paper, September 2024) 8.

  8. Graham Greenleaf, Andrew Mowbray and Philip Chung, ‘Building Sustainable Free Legal Advisory Systems: Experiences from the History of AI & Law’ (2018) 34(1) Computer Law & Security Review 1, 5.

  9. Fan Yang, Jake Goldenfein and Kathy Nickels, GenAI Concepts: Technical, Operational and Regulatory Terms and Concepts for Generative Artificial Intelligence (GenAI) (Report, ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S), and the Office of the Victorian Information Commissioner (OVIC), 2024) 4 <https://apo.org.au/node/327400>.

  10. ASEAN Guide on AI Governance and Ethics (Report, Association of South East Asian Nations (ASEAN), February 2024) 9 <https://asean.org/wp-content/uploads/2024/02/ASEAN-Guide-on-AI-Governance-and-Ethics_beautified_201223_v2.pdf>.

  11. Fan Yang, Jake Goldenfein and Kathy Nickels, GenAI Concepts: Technical, Operational and Regulatory Terms and Concepts for Generative Artificial Intelligence (GenAI) (Report, ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S), and the Office of the Victorian Information Commissioner (OVIC), 2024) 5 <https://apo.org.au/node/327400>.

  12. Ibid 2.

  13. ‘CoCounsel. Focus on the Work That Matters with a Trusted Gen AI Assistant’, Thomson Reuters Australia (Web Page) <https://www.thomsonreuters.com.au/en-au/products/cocounsel.html>; ‘Introducing Protégé Your Personalized Legal AI Assistant’, Lexis+ AI (Web Page) <https://www.lexisnexis.com/en-us/products/lexis-plus-ai.page#top>.

  14. Felicity Bell et al, AI Decision-Making and the Courts: A Guide for Judges, Tribunal Members and Court Administrators (Report, Australasian Institute of Judicial Administration Incorporated, December 2023) 9.

  15. ‘eLodgement’, Federal Court of Australia (Web Page, August 2024) <https://www.fedcourt.gov.au/online-services/elodgment>.

  16. Kimberlee Weatherall et al, Automated Decision-Making in New South Wales: Mapping and Analysis of the Use of ADM Systems by State and Local Governments (Research Report, ARC Centre of Excellence on Automated Decision-Making and Society (ADM+S), March 2024) 13 <https://apo.org.au/node/325901>.

  17. Ibid.

  18. NSW Ombudsman, The New Machinery of Government: Using Machine Technology in Administrative Decision-Making (Report, 29 November 2021) 16.

  19. Department of Industry, Science and Resources (Cth), Safe and Responsible AI in Australia: Proposals Paper for Introducing Mandatory Guardrails for AI in High-Risk Settings (Proposals Paper, September 2024) 11.

  20. National Institute of Standards and Technology (U.S.), Artificial Intelligence Risk Management Framework (AI RMF 1.0) (Report No NIST AI 100-1, 26 January 2023) 38 <http://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-1.pdf>.

  21. Ibid.

  22. David Gray Widder, Sarah West and Meredith Whittaker, ‘Open (For Business): Big Tech, Concentrated Power, and the Political Economy of Open AI’ (SSRN, 18 August 2023) 2 <https://www.ssrn.com/abstract=4543807>.

  23. Fan Yang, Jake Goldenfein and Kathy Nickels, GenAI Concepts: Technical, Operational and Regulatory Terms and Concepts for Generative Artificial Intelligence (GenAI) (Report, ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S), and the Office of the Victorian Information Commissioner (OVIC), 2024) 10–11 <https://apo.org.au/node/327400>.

  24. Ibid.

  25. Monika Zalnieriute, Technology and the Courts: Artificial Intelligence and Judicial Impartiality (Submission No. 3 to Australian Law Reform Commission Review of Judicial Impartiality, June 2021) 4 <https://www.alrc.gov.au/wp-content/uploads/2021/06/3-.-Monika-Zalnieriute-Public.pdf>.

  26. Dominik Hintersdorf, Lukas Struppek and Kristian Kersting, ‘Balancing Transparency and Risk: The Security and Privacy Risks of Open-Source Machine Learning Models’ (arXiv, 18 August 2023) 6 <http://arxiv.org/abs/2308.09490>.

  27. Ibid 5–6.

  28. OpenAI et al, ‘GPT-4 Technical Report’ (No arXiv:2303.08774, arXiv, 4 March 2024) <http://arxiv.org/abs/2303.08774>; Chloe Xiang, ‘OpenAI’s GPT-4 Is Closed Source and Shrouded in Secrecy’, VICE (online, 16 March 2023) <https://www.vice.com/en/article/openais-gpt-4-is-closed-source-and-shrouded-in-secrecy/>; Chloe Xiang, ‘OpenAI Is Now Everything It Promised Not to Be: Corporate, Closed-Source, and For-Profit’, VICE (online, 28 February 2023) <https://www.vice.com/en/article/openai-is-now-everything-it-promised-not-to-be-corporate-closed-source-and-for-profit/>.

  29. ‘Meet Llama 3.1’, Meta (Web Page) <https://llama.meta.com/>.

  30. Conflict Analytics Lab, OpenJustice (Web Page) <https://openjustice.ai/>; Samuel Dahan et al, ‘OpenJustice.Ai: A Global Open-Source Legal Language Model’ in Volume 379: Legal Knowledge and Information Systems, JURIX 2023: The Thirty-Sixth Annual Conference, Maastricht, the Netherlands, 18–20 December 2023 (IOS Press, 2023) 387.

  31. ‘ChatGPT’, OpenAI (Web Page, 2024) <https://openai.com/chatgpt/>.

  32. Varun Magesh et al, ‘Hallucination-Free? Assessing the Reliability of Leading AI Legal Research Tools’ (arXiv, 2024) <https://arxiv.org/abs/2405.20362>.

  33. ‘CoCounsel Drafting – Save Valuable Time with CoCounsel Drafting’, Thomson Reuters (Web Page) <https://legal.thomsonreuters.com/en/products/cocounsel-drafting>; ‘LexisNexis Launches Second-Generation Legal AI Assistant on Lexis+ AI’, LexisNexis (Web Page, 23 April 2024) <https://www.lexisnexis.com/community/pressroom/b/news/posts/lexisnexis-launches-second-generation-legal-ai-assistant-on-lexis-ai>.

  34. Fan Yang, Jake Goldenfein and Kathy Nickels, GenAI Concepts: Technical, Operational and Regulatory Terms and Concepts for Generative Artificial Intelligence (GenAI) (Report, ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S), and the Office of the Victorian Information Commissioner (OVIC), 2024) 6 <https://apo.org.au/node/327400>.

  35. Regulation (EU) 2024/1689 (Artificial Intelligence Act) [2024] OJ L 2024/1689, Ch I Art 3(63).

  36. Fan Yang, Jake Goldenfein and Kathy Nickels, GenAI Concepts: Technical, Operational and Regulatory Terms and Concepts for Generative Artificial Intelligence (GenAI) (Report, ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S), and the Office of the Victorian Information Commissioner (OVIC), 2024) 7 <https://apo.org.au/node/327400>; G Bell, J Burgess and S Sadiq, Generative AI: Language Models and Multimodal Foundation Models (Rapid Response Information Report, Australian Council of Learned Academies, 24 March 2023) 27.

  37. Fan Yang, Jake Goldenfein and Kathy Nickels, GenAI Concepts: Technical, Operational and Regulatory Terms and Concepts for Generative Artificial Intelligence (GenAI) (Report, ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S), and the Office of the Victorian Information Commissioner (OVIC), 2024) 7 <https://apo.org.au/node/327400>.

  38. International Organization for Standardization (ISO), ‘What Is Artificial Intelligence (AI)?’, ISO (Web Page) <https://www.iso.org/cms/render/live/en/sites/isoorg/contents/news/insights/AI/what-is-ai-all-you-need-to-know.evergreen.html>. This distinction is sometimes characterised as the difference between ‘weak AI’ and ‘strong AI’. Artificial general intelligence or general AI is also often distinguished from ‘narrow AI’. See the Glossary for further explanation.

  39. Ibid.

  40. Many legal experts have explored how the technical limitations of AI shape views about its most appropriate applications. These include Tania Sourdin, ‘Judge v Robot? Artificial Intelligence and Judicial Decision-Making’ (2018) 41(4) University of New South Wales Law Journal 1114, 1122–24; Monika Zalnieriute and Felicity Bell, ‘Technology and the Judicial Role’ in Gabrielle Appleby and Andrew Lynch (eds), The Judge, the Judiciary and the Court: Individual, Collegial and Institutional Judicial Dynamics in Australia (Cambridge University Press, 2021) 116, 138–141; John Zeleznikow, ‘The Benefits and Dangers of Using Machine Learning to Support Making Legal Predictions’ (2023) 13(4) WIREs Data Mining and Knowledge Discovery e1505, 16–17; Simon Chesterman, Lyria Bennett Moses and Ugo Pagallo, ‘All Rise for the Honourable Robot Judge? Using Artificial Intelligence to Regulate AI’ [2023] Technology and Regulation 45, 47–48, 55; John Morison and Tomás McInerney, ‘When Should a Computer Decide? Judicial Decision-Making in the Age of Automation, Algorithms and Generative Artificial Intelligence’ in S Turenne and M Moussa (eds), Research Handbook on Judging and the Judiciary (Edward Elgar-Routledge, 2024) 28–34; Tania Sourdin, Judges, Technology and Artificial Intelligence: The Artificial Judge (Edward Elgar Publishing, 2021) 209–35.

  41. For example, this argument is explored for large language models in John Morison and Tomás McInerney, ‘When Should a Computer Decide? Judicial Decision-Making in the Age of Automation, Algorithms and Generative Artificial Intelligence’ in S Turenne and M Moussa (eds), Research Handbook on Judging and the Judiciary (Edward Elgar-Routledge, 2024) 6.

  42. Yannick Meneceur and Clementina Barbaro, ‘Artificial Intelligence and the Judicial Memory: The Great Misunderstanding’ (2022) 2 AI and Ethics 269.

  43. For further consideration of the unique human qualities embedded in judicial discretion see Justice Melissa Perry, AI and Automated Decision-Making: Are You Just Another Number? (Speech, Kerr’s Vision Splendid for Administrative Law: Still Fit for Purpose? – Online Symposium on the 50th Anniversary of the Kerr Report, Gilbert + Tobin Centre of Public Law, UNSW Law & Justice NSW Chapter, Australian Institute of Administrative Law, 4 November 2021) <https://www.fedcourt.gov.au/digital-law-library/judges-speeches/justice-perry/perry-j-20211021#_ftnref11>.